00:00:00.023 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 109 00:00:00.024 originally caused by: 00:00:00.024 Started by upstream project "nightly-trigger" build number 3287 00:00:00.024 originally caused by: 00:00:00.024 Started by timer 00:00:00.131 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.132 The recommended git tool is: git 00:00:00.132 using credential 00000000-0000-0000-0000-000000000002 00:00:00.134 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.196 Fetching changes from the remote Git repository 00:00:00.198 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.239 Using shallow fetch with depth 1 00:00:00.239 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.239 > git --version # timeout=10 00:00:00.271 > git --version # 'git version 2.39.2' 00:00:00.271 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.294 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.294 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.989 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.001 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.015 Checking out Revision f7830e7c5d95762fb88ef73dac888ff5050122c9 (FETCH_HEAD) 00:00:06.015 > git config core.sparsecheckout # timeout=10 00:00:06.027 > git read-tree -mu HEAD # timeout=10 00:00:06.048 > git checkout -f f7830e7c5d95762fb88ef73dac888ff5050122c9 # timeout=5 00:00:06.072 Commit message: "doc: update AC01 PDU information" 00:00:06.072 > git rev-list --no-walk f7830e7c5d95762fb88ef73dac888ff5050122c9 # timeout=10 00:00:06.262 [Pipeline] Start of Pipeline 00:00:06.282 [Pipeline] library 00:00:06.283 Loading library shm_lib@master 00:00:06.284 Library shm_lib@master is cached. Copying from home. 00:00:06.321 [Pipeline] node 00:00:21.323 Still waiting to schedule task 00:00:21.323 Waiting for next available executor on ‘vagrant-vm-host’ 00:26:35.977 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:26:35.979 [Pipeline] { 00:26:35.992 [Pipeline] catchError 00:26:35.994 [Pipeline] { 00:26:36.008 [Pipeline] wrap 00:26:36.017 [Pipeline] { 00:26:36.026 [Pipeline] stage 00:26:36.028 [Pipeline] { (Prologue) 00:26:36.049 [Pipeline] echo 00:26:36.053 Node: VM-host-WFP7 00:26:36.080 [Pipeline] cleanWs 00:26:36.092 [WS-CLEANUP] Deleting project workspace... 00:26:36.092 [WS-CLEANUP] Deferred wipeout is used... 
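
For readers who want to replay the checkout step outside Jenkins, a minimal sketch of the jbp fetch/checkout sequence traced above; the repository URL and revision come from the log, while the local directory name and the omission of the GIT_ASKPASS credential plumbing are simplifying assumptions:

  # Illustrative local directory; the real job uses a hashed workspace path.
  git init jbp && cd jbp
  git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  # Shallow fetch of master only, as in the log (--depth=1).
  git fetch --tags --force --depth=1 origin refs/heads/master
  git rev-parse FETCH_HEAD^{commit}
  # Detached checkout of the fetched revision.
  git checkout -f f7830e7c5d95762fb88ef73dac888ff5050122c9
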
00:26:36.098 [WS-CLEANUP] done 00:26:36.353 [Pipeline] setCustomBuildProperty 00:26:36.456 [Pipeline] httpRequest 00:26:36.481 [Pipeline] echo 00:26:36.483 Sorcerer 10.211.164.101 is alive 00:26:36.493 [Pipeline] httpRequest 00:26:36.498 HttpMethod: GET 00:26:36.499 URL: http://10.211.164.101/packages/jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:26:36.499 Sending request to url: http://10.211.164.101/packages/jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:26:36.502 Response Code: HTTP/1.1 200 OK 00:26:36.502 Success: Status code 200 is in the accepted range: 200,404 00:26:36.503 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:26:36.648 [Pipeline] sh 00:26:36.934 + tar --no-same-owner -xf jbp_f7830e7c5d95762fb88ef73dac888ff5050122c9.tar.gz 00:26:36.951 [Pipeline] httpRequest 00:26:36.969 [Pipeline] echo 00:26:36.971 Sorcerer 10.211.164.101 is alive 00:26:36.981 [Pipeline] httpRequest 00:26:36.986 HttpMethod: GET 00:26:36.987 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:26:36.988 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:26:36.990 Response Code: HTTP/1.1 200 OK 00:26:36.991 Success: Status code 200 is in the accepted range: 200,404 00:26:36.991 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:26:39.160 [Pipeline] sh 00:26:39.443 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:26:42.049 [Pipeline] sh 00:26:42.330 + git -C spdk log --oneline -n5 00:26:42.330 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:26:42.330 330a4f94d nvme: check pthread_mutex_destroy() return value 00:26:42.330 7b72c3ced nvme: add nvme_ctrlr_lock 00:26:42.330 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:26:42.330 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:26:42.351 [Pipeline] withCredentials 00:26:42.362 > git --version # timeout=10 00:26:42.375 > git --version # 'git version 2.39.2' 00:26:42.393 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:26:42.395 [Pipeline] { 00:26:42.405 [Pipeline] retry 00:26:42.407 [Pipeline] { 00:26:42.424 [Pipeline] sh 00:26:42.709 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:26:45.253 [Pipeline] } 00:26:45.275 [Pipeline] // retry 00:26:45.280 [Pipeline] } 00:26:45.301 [Pipeline] // withCredentials 00:26:45.311 [Pipeline] httpRequest 00:26:45.327 [Pipeline] echo 00:26:45.328 Sorcerer 10.211.164.101 is alive 00:26:45.338 [Pipeline] httpRequest 00:26:45.343 HttpMethod: GET 00:26:45.343 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:45.344 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:45.346 Response Code: HTTP/1.1 200 OK 00:26:45.347 Success: Status code 200 is in the accepted range: 200,404 00:26:45.347 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:46.579 [Pipeline] sh 00:26:46.868 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:26:48.255 [Pipeline] sh 00:26:48.543 + git -C dpdk log --oneline -n5 00:26:48.543 caf0f5d395 version: 22.11.4 00:26:48.543 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:26:48.543 dc9c799c7d vhost: fix missing spinlock 
unlock 00:26:48.543 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:26:48.543 6ef77f2a5e net/gve: fix RX buffer size alignment 00:26:48.594 [Pipeline] writeFile 00:26:48.611 [Pipeline] sh 00:26:48.894 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:26:48.905 [Pipeline] sh 00:26:49.185 + cat autorun-spdk.conf 00:26:49.185 SPDK_RUN_FUNCTIONAL_TEST=1 00:26:49.185 SPDK_TEST_NVMF=1 00:26:49.185 SPDK_TEST_NVMF_TRANSPORT=tcp 00:26:49.185 SPDK_TEST_USDT=1 00:26:49.185 SPDK_RUN_UBSAN=1 00:26:49.186 SPDK_TEST_NVMF_MDNS=1 00:26:49.186 NET_TYPE=virt 00:26:49.186 SPDK_JSONRPC_GO_CLIENT=1 00:26:49.186 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:26:49.186 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:26:49.186 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:49.191 RUN_NIGHTLY=1 00:26:49.194 [Pipeline] } 00:26:49.212 [Pipeline] // stage 00:26:49.230 [Pipeline] stage 00:26:49.232 [Pipeline] { (Run VM) 00:26:49.247 [Pipeline] sh 00:26:49.528 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:26:49.528 + echo 'Start stage prepare_nvme.sh' 00:26:49.528 Start stage prepare_nvme.sh 00:26:49.528 + [[ -n 4 ]] 00:26:49.528 + disk_prefix=ex4 00:26:49.528 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:26:49.528 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:26:49.528 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:26:49.528 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:26:49.528 ++ SPDK_TEST_NVMF=1 00:26:49.528 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:26:49.528 ++ SPDK_TEST_USDT=1 00:26:49.528 ++ SPDK_RUN_UBSAN=1 00:26:49.528 ++ SPDK_TEST_NVMF_MDNS=1 00:26:49.528 ++ NET_TYPE=virt 00:26:49.528 ++ SPDK_JSONRPC_GO_CLIENT=1 00:26:49.528 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:26:49.528 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:26:49.528 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:26:49.528 ++ RUN_NIGHTLY=1 00:26:49.528 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:26:49.528 + nvme_files=() 00:26:49.528 + declare -A nvme_files 00:26:49.528 + backend_dir=/var/lib/libvirt/images/backends 00:26:49.528 + nvme_files['nvme.img']=5G 00:26:49.528 + nvme_files['nvme-cmb.img']=5G 00:26:49.528 + nvme_files['nvme-multi0.img']=4G 00:26:49.528 + nvme_files['nvme-multi1.img']=4G 00:26:49.528 + nvme_files['nvme-multi2.img']=4G 00:26:49.528 + nvme_files['nvme-openstack.img']=8G 00:26:49.528 + nvme_files['nvme-zns.img']=5G 00:26:49.528 + (( SPDK_TEST_NVME_PMR == 1 )) 00:26:49.528 + (( SPDK_TEST_FTL == 1 )) 00:26:49.528 + (( SPDK_TEST_NVME_FDP == 1 )) 00:26:49.528 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:26:49.528 + for nvme in "${!nvme_files[@]}" 00:26:49.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:26:49.528 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:26:49.528 + for nvme in "${!nvme_files[@]}" 00:26:49.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:26:49.528 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:26:49.528 + for nvme in "${!nvme_files[@]}" 00:26:49.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:26:49.528 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:26:49.528 + for nvme in "${!nvme_files[@]}" 00:26:49.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:26:49.528 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:26:49.528 + for nvme in "${!nvme_files[@]}" 00:26:49.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:26:49.528 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:26:49.528 + for nvme in "${!nvme_files[@]}" 00:26:49.528 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:26:49.787 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:26:49.787 + for nvme in "${!nvme_files[@]}" 00:26:49.787 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:26:50.353 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:26:50.353 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:26:50.353 + echo 'End stage prepare_nvme.sh' 00:26:50.353 End stage prepare_nvme.sh 00:26:50.364 [Pipeline] sh 00:26:50.646 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:26:50.646 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38 00:26:50.646 00:26:50.646 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:26:50.646 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:26:50.646 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:26:50.646 HELP=0 00:26:50.646 DRY_RUN=0 00:26:50.646 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:26:50.646 NVME_DISKS_TYPE=nvme,nvme, 00:26:50.646 NVME_AUTO_CREATE=0 00:26:50.646 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:26:50.646 NVME_CMB=,, 00:26:50.646 NVME_PMR=,, 00:26:50.646 NVME_ZNS=,, 00:26:50.646 NVME_MS=,, 00:26:50.646 NVME_FDP=,, 00:26:50.646 SPDK_VAGRANT_DISTRO=fedora38 
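
The prepare_nvme.sh trace above boils down to an associative array of image names and sizes driving create_nvme_img.sh; a condensed sketch of that loop follows, with the names, sizes and helper script taken from the log (how the helper formats the raw backing file, e.g. via qemu-img, is an assumption about its internals):

  #!/bin/bash
  disk_prefix=ex4
  backend_dir=/var/lib/libvirt/images/backends
  # Image name -> size map, as set up in the traced run.
  declare -A nvme_files=(
    [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G
    [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
    [nvme-openstack.img]=8G
  )
  for nvme in "${!nvme_files[@]}"; do
    # Each backing file ends up as e.g. ex4-nvme.img at 5G.
    sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
      -n "${backend_dir}/${disk_prefix}-${nvme}" -s "${nvme_files[$nvme]}"
  done
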
00:26:50.646 SPDK_VAGRANT_VMCPU=10 00:26:50.646 SPDK_VAGRANT_VMRAM=12288 00:26:50.646 SPDK_VAGRANT_PROVIDER=libvirt 00:26:50.646 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:26:50.646 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:26:50.646 SPDK_OPENSTACK_NETWORK=0 00:26:50.646 VAGRANT_PACKAGE_BOX=0 00:26:50.646 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:26:50.646 FORCE_DISTRO=true 00:26:50.646 VAGRANT_BOX_VERSION= 00:26:50.646 EXTRA_VAGRANTFILES= 00:26:50.646 NIC_MODEL=virtio 00:26:50.646 00:26:50.646 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:26:50.646 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:26:53.178 Bringing machine 'default' up with 'libvirt' provider... 00:26:53.747 ==> default: Creating image (snapshot of base box volume). 00:26:53.747 ==> default: Creating domain with the following settings... 00:26:53.747 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721659212_435b73917ffc842bb3db 00:26:53.747 ==> default: -- Domain type: kvm 00:26:53.747 ==> default: -- Cpus: 10 00:26:53.747 ==> default: -- Feature: acpi 00:26:53.747 ==> default: -- Feature: apic 00:26:53.747 ==> default: -- Feature: pae 00:26:53.747 ==> default: -- Memory: 12288M 00:26:53.747 ==> default: -- Memory Backing: hugepages: 00:26:53.747 ==> default: -- Management MAC: 00:26:53.747 ==> default: -- Loader: 00:26:53.747 ==> default: -- Nvram: 00:26:53.747 ==> default: -- Base box: spdk/fedora38 00:26:53.747 ==> default: -- Storage pool: default 00:26:53.747 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721659212_435b73917ffc842bb3db.img (20G) 00:26:53.747 ==> default: -- Volume Cache: default 00:26:53.747 ==> default: -- Kernel: 00:26:53.747 ==> default: -- Initrd: 00:26:53.747 ==> default: -- Graphics Type: vnc 00:26:53.747 ==> default: -- Graphics Port: -1 00:26:53.747 ==> default: -- Graphics IP: 127.0.0.1 00:26:53.747 ==> default: -- Graphics Password: Not defined 00:26:53.747 ==> default: -- Video Type: cirrus 00:26:53.747 ==> default: -- Video VRAM: 9216 00:26:53.747 ==> default: -- Sound Type: 00:26:53.747 ==> default: -- Keymap: en-us 00:26:53.747 ==> default: -- TPM Path: 00:26:53.747 ==> default: -- INPUT: type=mouse, bus=ps2 00:26:53.747 ==> default: -- Command line args: 00:26:53.747 ==> default: -> value=-device, 00:26:53.747 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:26:53.747 ==> default: -> value=-drive, 00:26:53.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:26:53.747 ==> default: -> value=-device, 00:26:53.747 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:53.747 ==> default: -> value=-device, 00:26:53.747 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:26:53.747 ==> default: -> value=-drive, 00:26:53.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:26:53.747 ==> default: -> value=-device, 00:26:53.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:53.747 ==> default: -> value=-drive, 00:26:53.747 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:26:53.747 ==> default: -> value=-device, 00:26:53.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:53.747 ==> default: -> value=-drive, 00:26:53.748 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:26:53.748 ==> default: -> value=-device, 00:26:53.748 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:26:54.007 ==> default: Creating shared folders metadata... 00:26:54.007 ==> default: Starting domain. 00:26:55.385 ==> default: Waiting for domain to get an IP address... 00:27:13.530 ==> default: Waiting for SSH to become available... 00:27:13.530 ==> default: Configuring and enabling network interfaces... 00:27:18.800 default: SSH address: 192.168.121.148:22 00:27:18.800 default: SSH username: vagrant 00:27:18.800 default: SSH auth method: private key 00:27:20.708 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:27:27.276 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:27:33.837 ==> default: Mounting SSHFS shared folder... 00:27:35.215 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:27:35.215 ==> default: Checking Mount.. 00:27:37.121 ==> default: Folder Successfully Mounted! 00:27:37.121 ==> default: Running provisioner: file... 00:27:37.690 default: ~/.gitconfig => .gitconfig 00:27:38.257 00:27:38.257 SUCCESS! 00:27:38.257 00:27:38.257 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:27:38.257 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:27:38.257 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:27:38.257 00:27:38.267 [Pipeline] } 00:27:38.285 [Pipeline] // stage 00:27:38.295 [Pipeline] dir 00:27:38.295 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:27:38.297 [Pipeline] { 00:27:38.311 [Pipeline] catchError 00:27:38.313 [Pipeline] { 00:27:38.326 [Pipeline] sh 00:27:38.609 + vagrant ssh-config --host vagrant 00:27:38.609 + sed -ne /^Host/,$p 00:27:38.609 + tee ssh_conf 00:27:41.900 Host vagrant 00:27:41.900 HostName 192.168.121.148 00:27:41.900 User vagrant 00:27:41.900 Port 22 00:27:41.900 UserKnownHostsFile /dev/null 00:27:41.900 StrictHostKeyChecking no 00:27:41.900 PasswordAuthentication no 00:27:41.900 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:27:41.900 IdentitiesOnly yes 00:27:41.900 LogLevel FATAL 00:27:41.900 ForwardAgent yes 00:27:41.900 ForwardX11 yes 00:27:41.900 00:27:41.917 [Pipeline] withEnv 00:27:41.920 [Pipeline] { 00:27:41.935 [Pipeline] sh 00:27:42.243 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:27:42.243 source /etc/os-release 00:27:42.243 [[ -e /image.version ]] && img=$(< /image.version) 00:27:42.243 # Minimal, systemd-like check. 
00:27:42.243 if [[ -e /.dockerenv ]]; then 00:27:42.243 # Clear garbage from the node's name: 00:27:42.243 # agt-er_autotest_547-896 -> autotest_547-896 00:27:42.243 # $HOSTNAME is the actual container id 00:27:42.243 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:27:42.243 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:27:42.243 # We can assume this is a mount from a host where container is running, 00:27:42.243 # so fetch its hostname to easily identify the target swarm worker. 00:27:42.243 container="$(< /etc/hostname) ($agent)" 00:27:42.243 else 00:27:42.243 # Fallback 00:27:42.243 container=$agent 00:27:42.243 fi 00:27:42.243 fi 00:27:42.243 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:27:42.243 00:27:42.512 [Pipeline] } 00:27:42.532 [Pipeline] // withEnv 00:27:42.540 [Pipeline] setCustomBuildProperty 00:27:42.556 [Pipeline] stage 00:27:42.558 [Pipeline] { (Tests) 00:27:42.577 [Pipeline] sh 00:27:42.858 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:27:43.129 [Pipeline] sh 00:27:43.408 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:27:43.681 [Pipeline] timeout 00:27:43.681 Timeout set to expire in 40 min 00:27:43.683 [Pipeline] { 00:27:43.698 [Pipeline] sh 00:27:43.988 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:27:44.555 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:27:44.568 [Pipeline] sh 00:27:44.848 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:27:45.118 [Pipeline] sh 00:27:45.397 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:27:45.671 [Pipeline] sh 00:27:45.949 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:27:46.207 ++ readlink -f spdk_repo 00:27:46.207 + DIR_ROOT=/home/vagrant/spdk_repo 00:27:46.207 + [[ -n /home/vagrant/spdk_repo ]] 00:27:46.207 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:27:46.207 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:27:46.207 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:27:46.207 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:27:46.207 + [[ -d /home/vagrant/spdk_repo/output ]] 00:27:46.207 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:27:46.207 + cd /home/vagrant/spdk_repo 00:27:46.207 + source /etc/os-release 00:27:46.207 ++ NAME='Fedora Linux' 00:27:46.207 ++ VERSION='38 (Cloud Edition)' 00:27:46.207 ++ ID=fedora 00:27:46.207 ++ VERSION_ID=38 00:27:46.207 ++ VERSION_CODENAME= 00:27:46.207 ++ PLATFORM_ID=platform:f38 00:27:46.207 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:27:46.207 ++ ANSI_COLOR='0;38;2;60;110;180' 00:27:46.207 ++ LOGO=fedora-logo-icon 00:27:46.207 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:27:46.207 ++ HOME_URL=https://fedoraproject.org/ 00:27:46.207 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:27:46.207 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:27:46.207 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:27:46.207 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:27:46.207 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:27:46.207 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:27:46.207 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:27:46.207 ++ SUPPORT_END=2024-05-14 00:27:46.207 ++ VARIANT='Cloud Edition' 00:27:46.207 ++ VARIANT_ID=cloud 00:27:46.207 + uname -a 00:27:46.207 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:27:46.207 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:27:46.770 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:46.770 Hugepages 00:27:46.770 node hugesize free / total 00:27:46.770 node0 1048576kB 0 / 0 00:27:46.770 node0 2048kB 0 / 0 00:27:46.770 00:27:46.770 Type BDF Vendor Device NUMA Driver Device Block devices 00:27:46.770 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:27:46.770 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:27:46.770 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:27:46.770 + rm -f /tmp/spdk-ld-path 00:27:46.770 + source autorun-spdk.conf 00:27:46.770 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:27:46.770 ++ SPDK_TEST_NVMF=1 00:27:46.770 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:27:46.770 ++ SPDK_TEST_USDT=1 00:27:46.770 ++ SPDK_RUN_UBSAN=1 00:27:46.770 ++ SPDK_TEST_NVMF_MDNS=1 00:27:46.770 ++ NET_TYPE=virt 00:27:46.770 ++ SPDK_JSONRPC_GO_CLIENT=1 00:27:46.770 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:27:46.770 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:27:46.770 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:27:46.770 ++ RUN_NIGHTLY=1 00:27:46.770 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:27:46.770 + [[ -n '' ]] 00:27:46.770 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:27:46.770 + for M in /var/spdk/build-*-manifest.txt 00:27:46.770 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:27:46.770 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:27:46.770 + for M in /var/spdk/build-*-manifest.txt 00:27:46.770 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:27:46.770 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:27:46.770 ++ uname 00:27:46.770 + [[ Linux == \L\i\n\u\x ]] 00:27:46.770 + sudo dmesg -T 00:27:46.770 + sudo dmesg --clear 00:27:46.770 + dmesg_pid=6069 00:27:46.770 + sudo dmesg -Tw 00:27:46.770 + [[ Fedora Linux == FreeBSD ]] 00:27:46.770 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:46.770 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:27:46.770 + [[ 
-e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:27:46.770 + [[ -x /usr/src/fio-static/fio ]] 00:27:46.770 + export FIO_BIN=/usr/src/fio-static/fio 00:27:46.770 + FIO_BIN=/usr/src/fio-static/fio 00:27:46.770 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:27:46.770 + [[ ! -v VFIO_QEMU_BIN ]] 00:27:46.770 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:27:46.770 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:46.770 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:27:46.770 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:27:46.770 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:46.770 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:27:46.770 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:27:47.029 Test configuration: 00:27:47.029 SPDK_RUN_FUNCTIONAL_TEST=1 00:27:47.029 SPDK_TEST_NVMF=1 00:27:47.029 SPDK_TEST_NVMF_TRANSPORT=tcp 00:27:47.029 SPDK_TEST_USDT=1 00:27:47.029 SPDK_RUN_UBSAN=1 00:27:47.029 SPDK_TEST_NVMF_MDNS=1 00:27:47.029 NET_TYPE=virt 00:27:47.029 SPDK_JSONRPC_GO_CLIENT=1 00:27:47.029 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:27:47.029 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:27:47.029 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:27:47.029 RUN_NIGHTLY=1 14:41:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:47.029 14:41:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:47.029 14:41:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.029 14:41:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.029 14:41:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.029 14:41:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.029 14:41:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.029 14:41:06 -- paths/export.sh@5 -- $ export PATH 00:27:47.029 14:41:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.029 14:41:06 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 
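
autorun-spdk.conf is a flat KEY=value shell fragment; as the ++ traces above show, both prepare_nvme.sh on the host and autorun.sh inside the VM simply source it. A minimal illustration of that pattern, where the SPDK_TEST_NVMF check is an illustrative consumer rather than a line from autorun.sh:

  #!/bin/bash
  # The conf file is plain shell, so consuming it is just a source.
  source /home/vagrant/spdk_repo/autorun-spdk.conf
  # Hypothetical consumer: branch on one of the test flags it sets.
  if [[ "${SPDK_TEST_NVMF:-0}" -eq 1 ]]; then
    echo "NVMe-oF tests enabled, transport: ${SPDK_TEST_NVMF_TRANSPORT}"
  fi
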
00:27:47.029 14:41:06 -- common/autobuild_common.sh@437 -- $ date +%s 00:27:47.029 14:41:06 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721659266.XXXXXX 00:27:47.029 14:41:06 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721659266.zJjtPH 00:27:47.029 14:41:06 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:27:47.029 14:41:06 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:27:47.029 14:41:06 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:47.029 14:41:06 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:47.029 14:41:06 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:47.029 14:41:06 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:47.029 14:41:06 -- common/autobuild_common.sh@453 -- $ get_config_params 00:27:47.029 14:41:06 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:27:47.029 14:41:06 -- common/autotest_common.sh@10 -- $ set +x 00:27:47.029 14:41:06 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:47.029 14:41:06 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:27:47.029 14:41:06 -- pm/common@17 -- $ local monitor 00:27:47.029 14:41:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:47.029 14:41:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:47.029 14:41:06 -- pm/common@21 -- $ date +%s 00:27:47.029 14:41:06 -- pm/common@25 -- $ sleep 1 00:27:47.029 14:41:06 -- pm/common@21 -- $ date +%s 00:27:47.029 14:41:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721659266 00:27:47.029 14:41:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721659266 00:27:47.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721659266_collect-vmstat.pm.log 00:27:47.029 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721659266_collect-cpu-load.pm.log 00:27:47.962 14:41:07 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:27:47.962 14:41:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:27:47.962 14:41:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:27:47.962 14:41:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:47.962 14:41:07 -- spdk/autobuild.sh@16 -- $ date -u 00:27:47.962 Mon Jul 22 02:41:07 PM UTC 2024 00:27:47.962 14:41:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:27:47.962 v24.05-13-g5fa2f5086 00:27:47.962 14:41:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:27:47.962 14:41:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:27:47.962 14:41:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:27:47.962 14:41:07 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:27:47.962 14:41:07 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:27:47.962 14:41:07 -- common/autotest_common.sh@10 -- $ set +x 00:27:48.220 ************************************ 00:27:48.220 START TEST ubsan 00:27:48.220 ************************************ 00:27:48.220 using ubsan 00:27:48.220 14:41:07 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:27:48.220 00:27:48.220 real 0m0.000s 00:27:48.220 user 0m0.000s 00:27:48.220 sys 0m0.000s 00:27:48.220 14:41:07 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:27:48.220 14:41:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:27:48.220 ************************************ 00:27:48.220 END TEST ubsan 00:27:48.220 ************************************ 00:27:48.220 14:41:07 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:27:48.220 14:41:07 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:27:48.220 14:41:07 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:27:48.220 14:41:07 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:27:48.220 14:41:07 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:27:48.220 14:41:07 -- common/autotest_common.sh@10 -- $ set +x 00:27:48.220 ************************************ 00:27:48.220 START TEST build_native_dpdk 00:27:48.220 ************************************ 00:27:48.220 14:41:07 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:27:48.220 caf0f5d395 version: 22.11.4 00:27:48.220 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:27:48.220 dc9c799c7d vhost: fix missing spinlock unlock 00:27:48.220 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:27:48.220 6ef77f2a5e net/gve: fix RX buffer size alignment 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:27:48.220 14:41:07 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:27:48.221 14:41:07 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:27:48.221 14:41:07 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:27:48.221 patching file config/rte_config.h 00:27:48.221 Hunk #1 succeeded at 60 (offset 1 line). 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:27:48.221 14:41:07 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:27:54.809 The Meson build system 00:27:54.809 Version: 1.3.1 00:27:54.809 Source dir: /home/vagrant/spdk_repo/dpdk 00:27:54.809 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:27:54.809 Build type: native build 00:27:54.809 Program cat found: YES (/usr/bin/cat) 00:27:54.809 Project name: DPDK 00:27:54.809 Project version: 22.11.4 00:27:54.809 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:27:54.809 C linker for the host machine: gcc ld.bfd 2.39-16 00:27:54.809 Host machine cpu family: x86_64 00:27:54.809 Host machine cpu: x86_64 00:27:54.809 Message: ## Building in Developer Mode ## 00:27:54.809 Program pkg-config found: YES (/usr/bin/pkg-config) 00:27:54.809 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:27:54.809 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:27:54.809 Program objdump found: YES (/usr/bin/objdump) 00:27:54.809 Program python3 found: YES (/usr/bin/python3) 00:27:54.809 Program cat found: YES (/usr/bin/cat) 00:27:54.809 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
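
The lt/cmp_versions trace from scripts/common.sh above compares the DPDK version component-wise after splitting on '.', '-' and ':'; here 22.11.4 < 21.11.0 evaluates false (return 1), after which the log shows config/rte_config.h being patched and meson being configured. A simplified stand-in for that check, assuming purely numeric components, is sketched below; it is not the actual scripts/common.sh function:

  # Return 0 if $1 < $2, 1 otherwise (numeric, component-wise compare).
  version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal -> not less-than
  }
  version_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"
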
00:27:54.809 Checking for size of "void *" : 8 00:27:54.809 Checking for size of "void *" : 8 (cached) 00:27:54.809 Library m found: YES 00:27:54.809 Library numa found: YES 00:27:54.809 Has header "numaif.h" : YES 00:27:54.809 Library fdt found: NO 00:27:54.809 Library execinfo found: NO 00:27:54.809 Has header "execinfo.h" : YES 00:27:54.809 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:27:54.809 Run-time dependency libarchive found: NO (tried pkgconfig) 00:27:54.809 Run-time dependency libbsd found: NO (tried pkgconfig) 00:27:54.809 Run-time dependency jansson found: NO (tried pkgconfig) 00:27:54.809 Run-time dependency openssl found: YES 3.0.9 00:27:54.809 Run-time dependency libpcap found: YES 1.10.4 00:27:54.809 Has header "pcap.h" with dependency libpcap: YES 00:27:54.809 Compiler for C supports arguments -Wcast-qual: YES 00:27:54.809 Compiler for C supports arguments -Wdeprecated: YES 00:27:54.809 Compiler for C supports arguments -Wformat: YES 00:27:54.809 Compiler for C supports arguments -Wformat-nonliteral: NO 00:27:54.809 Compiler for C supports arguments -Wformat-security: NO 00:27:54.809 Compiler for C supports arguments -Wmissing-declarations: YES 00:27:54.809 Compiler for C supports arguments -Wmissing-prototypes: YES 00:27:54.809 Compiler for C supports arguments -Wnested-externs: YES 00:27:54.809 Compiler for C supports arguments -Wold-style-definition: YES 00:27:54.809 Compiler for C supports arguments -Wpointer-arith: YES 00:27:54.809 Compiler for C supports arguments -Wsign-compare: YES 00:27:54.809 Compiler for C supports arguments -Wstrict-prototypes: YES 00:27:54.809 Compiler for C supports arguments -Wundef: YES 00:27:54.809 Compiler for C supports arguments -Wwrite-strings: YES 00:27:54.809 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:27:54.810 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:27:54.810 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:27:54.810 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:27:54.810 Compiler for C supports arguments -mavx512f: YES 00:27:54.810 Checking if "AVX512 checking" compiles: YES 00:27:54.810 Fetching value of define "__SSE4_2__" : 1 00:27:54.810 Fetching value of define "__AES__" : 1 00:27:54.810 Fetching value of define "__AVX__" : 1 00:27:54.810 Fetching value of define "__AVX2__" : 1 00:27:54.810 Fetching value of define "__AVX512BW__" : 1 00:27:54.810 Fetching value of define "__AVX512CD__" : 1 00:27:54.810 Fetching value of define "__AVX512DQ__" : 1 00:27:54.810 Fetching value of define "__AVX512F__" : 1 00:27:54.810 Fetching value of define "__AVX512VL__" : 1 00:27:54.810 Fetching value of define "__PCLMUL__" : 1 00:27:54.810 Fetching value of define "__RDRND__" : 1 00:27:54.810 Fetching value of define "__RDSEED__" : 1 00:27:54.810 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:27:54.810 Compiler for C supports arguments -Wno-format-truncation: YES 00:27:54.810 Message: lib/kvargs: Defining dependency "kvargs" 00:27:54.810 Message: lib/telemetry: Defining dependency "telemetry" 00:27:54.810 Checking for function "getentropy" : YES 00:27:54.810 Message: lib/eal: Defining dependency "eal" 00:27:54.810 Message: lib/ring: Defining dependency "ring" 00:27:54.810 Message: lib/rcu: Defining dependency "rcu" 00:27:54.810 Message: lib/mempool: Defining dependency "mempool" 00:27:54.810 Message: lib/mbuf: Defining dependency "mbuf" 00:27:54.810 Fetching value of define "__PCLMUL__" : 1 (cached) 00:27:54.810 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512BW__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512VL__" : 1 (cached) 00:27:54.810 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:27:54.810 Compiler for C supports arguments -mpclmul: YES 00:27:54.810 Compiler for C supports arguments -maes: YES 00:27:54.810 Compiler for C supports arguments -mavx512f: YES (cached) 00:27:54.810 Compiler for C supports arguments -mavx512bw: YES 00:27:54.810 Compiler for C supports arguments -mavx512dq: YES 00:27:54.810 Compiler for C supports arguments -mavx512vl: YES 00:27:54.810 Compiler for C supports arguments -mvpclmulqdq: YES 00:27:54.810 Compiler for C supports arguments -mavx2: YES 00:27:54.810 Compiler for C supports arguments -mavx: YES 00:27:54.810 Message: lib/net: Defining dependency "net" 00:27:54.810 Message: lib/meter: Defining dependency "meter" 00:27:54.810 Message: lib/ethdev: Defining dependency "ethdev" 00:27:54.810 Message: lib/pci: Defining dependency "pci" 00:27:54.810 Message: lib/cmdline: Defining dependency "cmdline" 00:27:54.810 Message: lib/metrics: Defining dependency "metrics" 00:27:54.810 Message: lib/hash: Defining dependency "hash" 00:27:54.810 Message: lib/timer: Defining dependency "timer" 00:27:54.810 Fetching value of define "__AVX2__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512F__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512VL__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512CD__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512BW__" : 1 (cached) 00:27:54.810 Message: lib/acl: Defining dependency "acl" 00:27:54.810 Message: lib/bbdev: Defining dependency "bbdev" 00:27:54.810 Message: lib/bitratestats: Defining dependency "bitratestats" 00:27:54.810 Run-time dependency libelf found: YES 0.190 00:27:54.810 Message: lib/bpf: Defining dependency "bpf" 00:27:54.810 Message: lib/cfgfile: Defining dependency "cfgfile" 00:27:54.810 Message: lib/compressdev: Defining dependency "compressdev" 00:27:54.810 Message: lib/cryptodev: Defining dependency "cryptodev" 00:27:54.810 Message: lib/distributor: Defining dependency "distributor" 00:27:54.810 Message: lib/efd: Defining dependency "efd" 00:27:54.810 Message: lib/eventdev: Defining dependency "eventdev" 00:27:54.810 Message: lib/gpudev: Defining dependency "gpudev" 00:27:54.810 Message: lib/gro: Defining dependency "gro" 00:27:54.810 Message: lib/gso: Defining dependency "gso" 00:27:54.810 Message: lib/ip_frag: Defining dependency "ip_frag" 00:27:54.810 Message: lib/jobstats: Defining dependency "jobstats" 00:27:54.810 Message: lib/latencystats: Defining dependency "latencystats" 00:27:54.810 Message: lib/lpm: Defining dependency "lpm" 00:27:54.810 Fetching value of define "__AVX512F__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512IFMA__" : (undefined) 00:27:54.810 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:27:54.810 Message: lib/member: Defining dependency "member" 00:27:54.810 Message: lib/pcapng: Defining dependency "pcapng" 00:27:54.810 Compiler for C supports arguments -Wno-cast-qual: YES 00:27:54.810 Message: lib/power: Defining dependency "power" 00:27:54.810 Message: lib/rawdev: Defining dependency "rawdev" 00:27:54.810 Message: lib/regexdev: Defining dependency "regexdev" 00:27:54.810 Message: lib/dmadev: 
Defining dependency "dmadev" 00:27:54.810 Message: lib/rib: Defining dependency "rib" 00:27:54.810 Message: lib/reorder: Defining dependency "reorder" 00:27:54.810 Message: lib/sched: Defining dependency "sched" 00:27:54.810 Message: lib/security: Defining dependency "security" 00:27:54.810 Message: lib/stack: Defining dependency "stack" 00:27:54.810 Has header "linux/userfaultfd.h" : YES 00:27:54.810 Message: lib/vhost: Defining dependency "vhost" 00:27:54.810 Message: lib/ipsec: Defining dependency "ipsec" 00:27:54.810 Fetching value of define "__AVX512F__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:27:54.810 Fetching value of define "__AVX512BW__" : 1 (cached) 00:27:54.810 Message: lib/fib: Defining dependency "fib" 00:27:54.810 Message: lib/port: Defining dependency "port" 00:27:54.810 Message: lib/pdump: Defining dependency "pdump" 00:27:54.810 Message: lib/table: Defining dependency "table" 00:27:54.810 Message: lib/pipeline: Defining dependency "pipeline" 00:27:54.810 Message: lib/graph: Defining dependency "graph" 00:27:54.810 Message: lib/node: Defining dependency "node" 00:27:54.810 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:27:54.810 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:27:54.810 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:27:54.810 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:27:54.810 Compiler for C supports arguments -Wno-sign-compare: YES 00:27:54.810 Compiler for C supports arguments -Wno-unused-value: YES 00:27:54.810 Compiler for C supports arguments -Wno-format: YES 00:27:54.810 Compiler for C supports arguments -Wno-format-security: YES 00:27:54.810 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:27:54.810 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:27:55.069 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:27:55.069 Compiler for C supports arguments -Wno-unused-parameter: YES 00:27:55.069 Fetching value of define "__AVX2__" : 1 (cached) 00:27:55.069 Fetching value of define "__AVX512F__" : 1 (cached) 00:27:55.069 Fetching value of define "__AVX512BW__" : 1 (cached) 00:27:55.069 Compiler for C supports arguments -mavx512f: YES (cached) 00:27:55.069 Compiler for C supports arguments -mavx512bw: YES (cached) 00:27:55.069 Compiler for C supports arguments -march=skylake-avx512: YES 00:27:55.069 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:27:55.069 Program doxygen found: YES (/usr/bin/doxygen) 00:27:55.069 Configuring doxy-api.conf using configuration 00:27:55.069 Program sphinx-build found: NO 00:27:55.069 Configuring rte_build_config.h using configuration 00:27:55.069 Message: 00:27:55.069 ================= 00:27:55.069 Applications Enabled 00:27:55.069 ================= 00:27:55.069 00:27:55.069 apps: 00:27:55.069 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:27:55.069 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:27:55.069 test-security-perf, 00:27:55.069 00:27:55.069 Message: 00:27:55.069 ================= 00:27:55.069 Libraries Enabled 00:27:55.069 ================= 00:27:55.069 00:27:55.069 libs: 00:27:55.069 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:27:55.069 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:27:55.069 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:27:55.069 eventdev, gpudev, 
gro, gso, ip_frag, jobstats, latencystats, lpm, 00:27:55.069 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:27:55.069 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:27:55.069 table, pipeline, graph, node, 00:27:55.069 00:27:55.069 Message: 00:27:55.069 =============== 00:27:55.069 Drivers Enabled 00:27:55.069 =============== 00:27:55.069 00:27:55.069 common: 00:27:55.069 00:27:55.069 bus: 00:27:55.069 pci, vdev, 00:27:55.069 mempool: 00:27:55.069 ring, 00:27:55.069 dma: 00:27:55.069 00:27:55.069 net: 00:27:55.069 i40e, 00:27:55.069 raw: 00:27:55.069 00:27:55.069 crypto: 00:27:55.069 00:27:55.069 compress: 00:27:55.069 00:27:55.069 regex: 00:27:55.069 00:27:55.069 vdpa: 00:27:55.069 00:27:55.069 event: 00:27:55.069 00:27:55.069 baseband: 00:27:55.069 00:27:55.069 gpu: 00:27:55.069 00:27:55.069 00:27:55.069 Message: 00:27:55.069 ================= 00:27:55.069 Content Skipped 00:27:55.069 ================= 00:27:55.069 00:27:55.069 apps: 00:27:55.069 00:27:55.069 libs: 00:27:55.069 kni: explicitly disabled via build config (deprecated lib) 00:27:55.069 flow_classify: explicitly disabled via build config (deprecated lib) 00:27:55.069 00:27:55.069 drivers: 00:27:55.069 common/cpt: not in enabled drivers build config 00:27:55.069 common/dpaax: not in enabled drivers build config 00:27:55.069 common/iavf: not in enabled drivers build config 00:27:55.069 common/idpf: not in enabled drivers build config 00:27:55.069 common/mvep: not in enabled drivers build config 00:27:55.069 common/octeontx: not in enabled drivers build config 00:27:55.069 bus/auxiliary: not in enabled drivers build config 00:27:55.069 bus/dpaa: not in enabled drivers build config 00:27:55.069 bus/fslmc: not in enabled drivers build config 00:27:55.069 bus/ifpga: not in enabled drivers build config 00:27:55.069 bus/vmbus: not in enabled drivers build config 00:27:55.069 common/cnxk: not in enabled drivers build config 00:27:55.069 common/mlx5: not in enabled drivers build config 00:27:55.069 common/qat: not in enabled drivers build config 00:27:55.069 common/sfc_efx: not in enabled drivers build config 00:27:55.069 mempool/bucket: not in enabled drivers build config 00:27:55.069 mempool/cnxk: not in enabled drivers build config 00:27:55.069 mempool/dpaa: not in enabled drivers build config 00:27:55.069 mempool/dpaa2: not in enabled drivers build config 00:27:55.069 mempool/octeontx: not in enabled drivers build config 00:27:55.069 mempool/stack: not in enabled drivers build config 00:27:55.069 dma/cnxk: not in enabled drivers build config 00:27:55.069 dma/dpaa: not in enabled drivers build config 00:27:55.069 dma/dpaa2: not in enabled drivers build config 00:27:55.069 dma/hisilicon: not in enabled drivers build config 00:27:55.069 dma/idxd: not in enabled drivers build config 00:27:55.069 dma/ioat: not in enabled drivers build config 00:27:55.069 dma/skeleton: not in enabled drivers build config 00:27:55.069 net/af_packet: not in enabled drivers build config 00:27:55.069 net/af_xdp: not in enabled drivers build config 00:27:55.069 net/ark: not in enabled drivers build config 00:27:55.069 net/atlantic: not in enabled drivers build config 00:27:55.069 net/avp: not in enabled drivers build config 00:27:55.069 net/axgbe: not in enabled drivers build config 00:27:55.069 net/bnx2x: not in enabled drivers build config 00:27:55.069 net/bnxt: not in enabled drivers build config 00:27:55.069 net/bonding: not in enabled drivers build config 00:27:55.069 net/cnxk: not in enabled drivers build config 
00:27:55.069 net/cxgbe: not in enabled drivers build config 00:27:55.069 net/dpaa: not in enabled drivers build config 00:27:55.069 net/dpaa2: not in enabled drivers build config 00:27:55.069 net/e1000: not in enabled drivers build config 00:27:55.069 net/ena: not in enabled drivers build config 00:27:55.069 net/enetc: not in enabled drivers build config 00:27:55.069 net/enetfec: not in enabled drivers build config 00:27:55.069 net/enic: not in enabled drivers build config 00:27:55.069 net/failsafe: not in enabled drivers build config 00:27:55.069 net/fm10k: not in enabled drivers build config 00:27:55.069 net/gve: not in enabled drivers build config 00:27:55.069 net/hinic: not in enabled drivers build config 00:27:55.069 net/hns3: not in enabled drivers build config 00:27:55.069 net/iavf: not in enabled drivers build config 00:27:55.069 net/ice: not in enabled drivers build config 00:27:55.069 net/idpf: not in enabled drivers build config 00:27:55.069 net/igc: not in enabled drivers build config 00:27:55.069 net/ionic: not in enabled drivers build config 00:27:55.069 net/ipn3ke: not in enabled drivers build config 00:27:55.069 net/ixgbe: not in enabled drivers build config 00:27:55.069 net/kni: not in enabled drivers build config 00:27:55.069 net/liquidio: not in enabled drivers build config 00:27:55.069 net/mana: not in enabled drivers build config 00:27:55.070 net/memif: not in enabled drivers build config 00:27:55.070 net/mlx4: not in enabled drivers build config 00:27:55.070 net/mlx5: not in enabled drivers build config 00:27:55.070 net/mvneta: not in enabled drivers build config 00:27:55.070 net/mvpp2: not in enabled drivers build config 00:27:55.070 net/netvsc: not in enabled drivers build config 00:27:55.070 net/nfb: not in enabled drivers build config 00:27:55.070 net/nfp: not in enabled drivers build config 00:27:55.070 net/ngbe: not in enabled drivers build config 00:27:55.070 net/null: not in enabled drivers build config 00:27:55.070 net/octeontx: not in enabled drivers build config 00:27:55.070 net/octeon_ep: not in enabled drivers build config 00:27:55.070 net/pcap: not in enabled drivers build config 00:27:55.070 net/pfe: not in enabled drivers build config 00:27:55.070 net/qede: not in enabled drivers build config 00:27:55.070 net/ring: not in enabled drivers build config 00:27:55.070 net/sfc: not in enabled drivers build config 00:27:55.070 net/softnic: not in enabled drivers build config 00:27:55.070 net/tap: not in enabled drivers build config 00:27:55.070 net/thunderx: not in enabled drivers build config 00:27:55.070 net/txgbe: not in enabled drivers build config 00:27:55.070 net/vdev_netvsc: not in enabled drivers build config 00:27:55.070 net/vhost: not in enabled drivers build config 00:27:55.070 net/virtio: not in enabled drivers build config 00:27:55.070 net/vmxnet3: not in enabled drivers build config 00:27:55.070 raw/cnxk_bphy: not in enabled drivers build config 00:27:55.070 raw/cnxk_gpio: not in enabled drivers build config 00:27:55.070 raw/dpaa2_cmdif: not in enabled drivers build config 00:27:55.070 raw/ifpga: not in enabled drivers build config 00:27:55.070 raw/ntb: not in enabled drivers build config 00:27:55.070 raw/skeleton: not in enabled drivers build config 00:27:55.070 crypto/armv8: not in enabled drivers build config 00:27:55.070 crypto/bcmfs: not in enabled drivers build config 00:27:55.070 crypto/caam_jr: not in enabled drivers build config 00:27:55.070 crypto/ccp: not in enabled drivers build config 00:27:55.070 crypto/cnxk: not in enabled drivers 
build config
00:27:55.070 crypto/dpaa_sec: not in enabled drivers build config
00:27:55.070 crypto/dpaa2_sec: not in enabled drivers build config
00:27:55.070 crypto/ipsec_mb: not in enabled drivers build config
00:27:55.070 crypto/mlx5: not in enabled drivers build config
00:27:55.070 crypto/mvsam: not in enabled drivers build config
00:27:55.070 crypto/nitrox: not in enabled drivers build config
00:27:55.070 crypto/null: not in enabled drivers build config
00:27:55.070 crypto/octeontx: not in enabled drivers build config
00:27:55.070 crypto/openssl: not in enabled drivers build config
00:27:55.070 crypto/scheduler: not in enabled drivers build config
00:27:55.070 crypto/uadk: not in enabled drivers build config
00:27:55.070 crypto/virtio: not in enabled drivers build config
00:27:55.070 compress/isal: not in enabled drivers build config
00:27:55.070 compress/mlx5: not in enabled drivers build config
00:27:55.070 compress/octeontx: not in enabled drivers build config
00:27:55.070 compress/zlib: not in enabled drivers build config
00:27:55.070 regex/mlx5: not in enabled drivers build config
00:27:55.070 regex/cn9k: not in enabled drivers build config
00:27:55.070 vdpa/ifc: not in enabled drivers build config
00:27:55.070 vdpa/mlx5: not in enabled drivers build config
00:27:55.070 vdpa/sfc: not in enabled drivers build config
00:27:55.070 event/cnxk: not in enabled drivers build config
00:27:55.070 event/dlb2: not in enabled drivers build config
00:27:55.070 event/dpaa: not in enabled drivers build config
00:27:55.070 event/dpaa2: not in enabled drivers build config
00:27:55.070 event/dsw: not in enabled drivers build config
00:27:55.070 event/opdl: not in enabled drivers build config
00:27:55.070 event/skeleton: not in enabled drivers build config
00:27:55.070 event/sw: not in enabled drivers build config
00:27:55.070 event/octeontx: not in enabled drivers build config
00:27:55.070 baseband/acc: not in enabled drivers build config
00:27:55.070 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:27:55.070 baseband/fpga_lte_fec: not in enabled drivers build config
00:27:55.070 baseband/la12xx: not in enabled drivers build config
00:27:55.070 baseband/null: not in enabled drivers build config
00:27:55.070 baseband/turbo_sw: not in enabled drivers build config
00:27:55.070 gpu/cuda: not in enabled drivers build config
00:27:55.070
00:27:55.070
00:27:55.070 Build targets in project: 311
00:27:55.070
00:27:55.070 DPDK 22.11.4
00:27:55.070
00:27:55.070 User defined options
00:27:55.070 libdir : lib
00:27:55.070 prefix : /home/vagrant/spdk_repo/dpdk/build
00:27:55.070 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:27:55.070 c_link_args :
00:27:55.070 enable_docs : false
00:27:55.070 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:27:55.070 enable_kmods : false
00:27:55.070 machine : native
00:27:55.070 tests : false
00:27:55.070
00:27:55.070 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:27:55.070 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
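(For reference: the "User defined options" above correspond roughly to an explicit `meson setup` invocation of the following shape. Paths and option values are taken from the summary above and from the ninja command that follows; the exact command assembled by the SPDK autobuild script is not reproduced in this log, so this is an illustrative sketch rather than the verbatim invocation. Spelling out `meson setup` instead of bare `meson` also avoids the deprecation warning printed above.)

    # sketch only: option names/values as listed under "User defined options"
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j10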
00:27:55.070 14:41:14 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:27:55.070 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:27:55.330 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:27:55.330 [2/740] Generating lib/rte_kvargs_def with a custom command 00:27:55.330 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:27:55.330 [4/740] Generating lib/rte_telemetry_def with a custom command 00:27:55.330 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:27:55.330 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:27:55.330 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:27:55.330 [8/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:27:55.330 [9/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:27:55.330 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:27:55.330 [11/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:27:55.330 [12/740] Linking static target lib/librte_kvargs.a 00:27:55.330 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:27:55.330 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:27:55.589 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:27:55.589 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:27:55.589 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:27:55.589 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:27:55.589 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:27:55.589 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:27:55.589 [21/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:27:55.589 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:27:55.589 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:27:55.589 [24/740] Linking target lib/librte_kvargs.so.23.0 00:27:55.589 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:27:55.589 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:27:55.849 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:27:55.849 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:27:55.849 [29/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:27:55.849 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:27:55.849 [31/740] Linking static target lib/librte_telemetry.a 00:27:55.849 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:27:55.849 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:27:55.849 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:27:55.849 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:27:55.849 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:27:55.849 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:27:55.849 [38/740] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:27:56.141 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:27:56.141 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:27:56.141 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:27:56.141 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:27:56.141 [43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:27:56.141 [44/740] Linking target lib/librte_telemetry.so.23.0 00:27:56.141 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:27:56.141 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:27:56.141 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:27:56.141 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:27:56.141 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:27:56.400 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:27:56.400 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:27:56.400 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:27:56.400 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:27:56.400 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:27:56.400 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:27:56.400 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:27:56.400 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:27:56.400 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:27:56.400 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:27:56.400 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:27:56.400 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:27:56.400 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:27:56.400 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:27:56.400 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:27:56.400 [65/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:27:56.400 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:27:56.400 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:27:56.401 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:27:56.660 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:27:56.660 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:27:56.660 [71/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:27:56.660 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:27:56.660 [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:27:56.660 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:27:56.660 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:27:56.660 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:27:56.660 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:27:56.660 [78/740] Generating 
lib/rte_eal_def with a custom command 00:27:56.660 [79/740] Generating lib/rte_eal_mingw with a custom command 00:27:56.660 [80/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:27:56.660 [81/740] Generating lib/rte_ring_def with a custom command 00:27:56.660 [82/740] Generating lib/rte_ring_mingw with a custom command 00:27:56.660 [83/740] Generating lib/rte_rcu_mingw with a custom command 00:27:56.660 [84/740] Generating lib/rte_rcu_def with a custom command 00:27:56.660 [85/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:27:56.660 [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:27:56.919 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:27:56.919 [88/740] Linking static target lib/librte_ring.a 00:27:56.919 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:27:56.919 [90/740] Generating lib/rte_mempool_def with a custom command 00:27:56.919 [91/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:27:56.919 [92/740] Generating lib/rte_mempool_mingw with a custom command 00:27:56.919 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:27:57.179 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:27:57.179 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:27:57.179 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:27:57.179 [97/740] Generating lib/rte_mbuf_def with a custom command 00:27:57.179 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:27:57.179 [99/740] Generating lib/rte_mbuf_mingw with a custom command 00:27:57.179 [100/740] Linking static target lib/librte_eal.a 00:27:57.179 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:27:57.438 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:27:57.438 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:27:57.438 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:27:57.438 [105/740] Linking static target lib/librte_rcu.a 00:27:57.698 [106/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:27:57.698 [107/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:27:57.698 [108/740] Linking static target lib/librte_mempool.a 00:27:57.698 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:27:57.698 [110/740] Generating lib/rte_net_def with a custom command 00:27:57.698 [111/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:27:57.698 [112/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:27:57.698 [113/740] Generating lib/rte_net_mingw with a custom command 00:27:57.698 [114/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:27:57.698 [115/740] Generating lib/rte_meter_def with a custom command 00:27:57.698 [116/740] Generating lib/rte_meter_mingw with a custom command 00:27:57.698 [117/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:27:57.698 [118/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:27:57.698 [119/740] Linking static target lib/librte_meter.a 00:27:57.958 [120/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:27:57.958 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:27:57.958 [122/740] Compiling C object 
lib/librte_net.a.p/net_net_crc_sse.c.o 00:27:57.958 [123/740] Linking static target lib/librte_net.a 00:27:57.958 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.218 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:27:58.218 [126/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:27:58.218 [127/740] Linking static target lib/librte_mbuf.a 00:27:58.218 [128/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.218 [129/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.218 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:27:58.218 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:27:58.218 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:27:58.478 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:27:58.737 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:27:58.737 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.737 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:27:58.737 [137/740] Generating lib/rte_ethdev_def with a custom command 00:27:58.737 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:27:58.737 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:27:58.737 [140/740] Generating lib/rte_pci_def with a custom command 00:27:58.996 [141/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:27:58.996 [142/740] Generating lib/rte_pci_mingw with a custom command 00:27:58.996 [143/740] Linking static target lib/librte_pci.a 00:27:58.996 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:27:58.996 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:27:58.996 [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:27:58.996 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:27:58.996 [148/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:27:58.996 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:27:59.255 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:27:59.255 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:27:59.255 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:27:59.255 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:27:59.255 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:27:59.256 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:27:59.256 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:27:59.256 [157/740] Generating lib/rte_cmdline_def with a custom command 00:27:59.256 [158/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:27:59.256 [159/740] Generating lib/rte_cmdline_mingw with a custom command 00:27:59.256 [160/740] Generating lib/rte_metrics_def with a custom command 00:27:59.256 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:27:59.256 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:27:59.256 [163/740] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:27:59.515 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:27:59.515 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:27:59.515 [166/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:27:59.515 [167/740] Linking static target lib/librte_cmdline.a 00:27:59.515 [168/740] Generating lib/rte_hash_def with a custom command 00:27:59.515 [169/740] Generating lib/rte_hash_mingw with a custom command 00:27:59.515 [170/740] Generating lib/rte_timer_def with a custom command 00:27:59.515 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:27:59.515 [172/740] Generating lib/rte_timer_mingw with a custom command 00:27:59.515 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:27:59.774 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:27:59.774 [175/740] Linking static target lib/librte_metrics.a 00:27:59.774 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:27:59.774 [177/740] Linking static target lib/librte_timer.a 00:28:00.032 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:28:00.032 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:28:00.032 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:28:00.289 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:28:00.289 [182/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:28:00.289 [183/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:28:00.289 [184/740] Generating lib/rte_acl_def with a custom command 00:28:00.290 [185/740] Generating lib/rte_acl_mingw with a custom command 00:28:00.549 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:28:00.549 [187/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:28:00.549 [188/740] Generating lib/rte_bbdev_def with a custom command 00:28:00.549 [189/740] Generating lib/rte_bbdev_mingw with a custom command 00:28:00.549 [190/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:28:00.549 [191/740] Generating lib/rte_bitratestats_def with a custom command 00:28:00.549 [192/740] Linking static target lib/librte_ethdev.a 00:28:00.549 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:28:00.839 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:28:00.839 [195/740] Linking static target lib/librte_bitratestats.a 00:28:00.839 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:28:01.097 [197/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:28:01.097 [198/740] Linking static target lib/librte_bbdev.a 00:28:01.097 [199/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:28:01.097 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:28:01.356 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:28:01.615 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:28:01.615 [203/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:01.615 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:28:01.874 [205/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:28:01.874 
[206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:28:01.874 [207/740] Linking static target lib/librte_hash.a 00:28:02.134 [208/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:28:02.134 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:28:02.134 [210/740] Generating lib/rte_bpf_def with a custom command 00:28:02.134 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:28:02.393 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:28:02.393 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:28:02.393 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:28:02.393 [215/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:28:02.393 [216/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:28:02.393 [217/740] Linking static target lib/librte_cfgfile.a 00:28:02.652 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:28:02.652 [219/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:28:02.652 [220/740] Generating lib/rte_compressdev_def with a custom command 00:28:02.652 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:28:02.652 [222/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:28:02.911 [223/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:28:02.911 [224/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:28:02.911 [225/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:28:02.911 [226/740] Linking static target lib/librte_bpf.a 00:28:02.911 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:28:02.911 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:28:03.170 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:28:03.170 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:28:03.170 [231/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:28:03.170 [232/740] Linking static target lib/librte_compressdev.a 00:28:03.170 [233/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:28:03.170 [234/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.170 [235/740] Linking static target lib/librte_acl.a 00:28:03.170 [236/740] Generating lib/rte_distributor_def with a custom command 00:28:03.429 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:28:03.429 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:28:03.429 [239/740] Generating lib/rte_efd_def with a custom command 00:28:03.429 [240/740] Generating lib/rte_efd_mingw with a custom command 00:28:03.429 [241/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.429 [242/740] Linking target lib/librte_eal.so.23.0 00:28:03.429 [243/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.429 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:28:03.689 [245/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:28:03.689 [246/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:28:03.689 [247/740] Linking target lib/librte_ring.so.23.0 
00:28:03.689 [248/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:28:03.689 [249/740] Linking target lib/librte_rcu.so.23.0 00:28:03.689 [250/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:28:03.948 [251/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:28:03.948 [252/740] Linking target lib/librte_mempool.so.23.0 00:28:03.948 [253/740] Linking target lib/librte_meter.so.23.0 00:28:03.948 [254/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:03.948 [255/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:28:03.948 [256/740] Linking target lib/librte_pci.so.23.0 00:28:03.948 [257/740] Linking target lib/librte_timer.so.23.0 00:28:03.948 [258/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:28:03.948 [259/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:28:03.948 [260/740] Linking target lib/librte_mbuf.so.23.0 00:28:03.948 [261/740] Linking target lib/librte_acl.so.23.0 00:28:03.948 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:28:03.948 [263/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:28:03.948 [264/740] Linking static target lib/librte_distributor.a 00:28:04.208 [265/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:28:04.208 [266/740] Linking target lib/librte_cfgfile.so.23.0 00:28:04.208 [267/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:28:04.208 [268/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:28:04.208 [269/740] Linking target lib/librte_net.so.23.0 00:28:04.208 [270/740] Linking target lib/librte_bbdev.so.23.0 00:28:04.208 [271/740] Linking target lib/librte_compressdev.so.23.0 00:28:04.208 [272/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:28:04.208 [273/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.467 [274/740] Linking target lib/librte_cmdline.so.23.0 00:28:04.467 [275/740] Linking target lib/librte_hash.so.23.0 00:28:04.467 [276/740] Linking target lib/librte_distributor.so.23.0 00:28:04.467 [277/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:28:04.467 [278/740] Generating lib/rte_eventdev_def with a custom command 00:28:04.467 [279/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:28:04.467 [280/740] Generating lib/rte_eventdev_mingw with a custom command 00:28:04.467 [281/740] Generating lib/rte_gpudev_def with a custom command 00:28:04.467 [282/740] Generating lib/rte_gpudev_mingw with a custom command 00:28:04.726 [283/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:04.726 [284/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:28:04.726 [285/740] Linking static target lib/librte_efd.a 00:28:04.726 [286/740] Linking target lib/librte_ethdev.so.23.0 00:28:04.726 [287/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:28:04.985 [288/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:28:04.985 [289/740] Linking target lib/librte_metrics.so.23.0 00:28:04.986 [290/740] Generating lib/efd.sym_chk with a custom 
command (wrapped by meson to capture output) 00:28:04.986 [291/740] Linking target lib/librte_bpf.so.23.0 00:28:04.986 [292/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:28:04.986 [293/740] Linking static target lib/librte_cryptodev.a 00:28:04.986 [294/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:28:04.986 [295/740] Linking target lib/librte_bitratestats.so.23.0 00:28:05.244 [296/740] Linking target lib/librte_efd.so.23.0 00:28:05.244 [297/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:28:05.244 [298/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:28:05.245 [299/740] Linking static target lib/librte_gpudev.a 00:28:05.245 [300/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:28:05.245 [301/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:28:05.245 [302/740] Generating lib/rte_gro_def with a custom command 00:28:05.541 [303/740] Generating lib/rte_gro_mingw with a custom command 00:28:05.541 [304/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:28:05.541 [305/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:28:05.541 [306/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:28:05.799 [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:28:05.799 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:28:05.799 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:28:05.799 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:28:05.799 [311/740] Generating lib/rte_gso_def with a custom command 00:28:05.799 [312/740] Generating lib/rte_gso_mingw with a custom command 00:28:05.799 [313/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:28:05.799 [314/740] Linking static target lib/librte_gro.a 00:28:05.799 [315/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:06.058 [316/740] Linking target lib/librte_gpudev.so.23.0 00:28:06.058 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:28:06.058 [318/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:28:06.058 [319/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:28:06.058 [320/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:28:06.058 [321/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:28:06.058 [322/740] Linking static target lib/librte_eventdev.a 00:28:06.058 [323/740] Linking target lib/librte_gro.so.23.0 00:28:06.318 [324/740] Generating lib/rte_ip_frag_def with a custom command 00:28:06.318 [325/740] Generating lib/rte_ip_frag_mingw with a custom command 00:28:06.318 [326/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:28:06.318 [327/740] Linking static target lib/librte_gso.a 00:28:06.318 [328/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:28:06.318 [329/740] Linking static target lib/librte_jobstats.a 00:28:06.318 [330/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:28:06.318 [331/740] Generating lib/rte_jobstats_def with a custom command 00:28:06.318 [332/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:28:06.318 [333/740] Generating lib/rte_jobstats_mingw with a custom command 00:28:06.318 
[334/740] Linking target lib/librte_gso.so.23.0 00:28:06.581 [335/740] Generating lib/rte_latencystats_def with a custom command 00:28:06.581 [336/740] Generating lib/rte_latencystats_mingw with a custom command 00:28:06.581 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:28:06.581 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:28:06.581 [339/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:28:06.581 [340/740] Generating lib/rte_lpm_def with a custom command 00:28:06.581 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:28:06.581 [342/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:28:06.581 [343/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:28:06.840 [344/740] Linking target lib/librte_jobstats.so.23.0 00:28:06.840 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:28:06.840 [346/740] Linking static target lib/librte_ip_frag.a 00:28:06.840 [347/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:06.840 [348/740] Linking target lib/librte_cryptodev.so.23.0 00:28:07.099 [349/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:28:07.099 [350/740] Linking static target lib/librte_latencystats.a 00:28:07.099 [351/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:28:07.099 [352/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:28:07.099 [353/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:28:07.099 [354/740] Linking target lib/librte_ip_frag.so.23.0 00:28:07.099 [355/740] Generating lib/rte_member_def with a custom command 00:28:07.099 [356/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:28:07.099 [357/740] Generating lib/rte_member_mingw with a custom command 00:28:07.099 [358/740] Generating lib/rte_pcapng_def with a custom command 00:28:07.099 [359/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:28:07.099 [360/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:28:07.099 [361/740] Generating lib/rte_pcapng_mingw with a custom command 00:28:07.099 [362/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:28:07.359 [363/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:28:07.359 [364/740] Linking target lib/librte_latencystats.so.23.0 00:28:07.359 [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:28:07.359 [366/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:28:07.359 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:28:07.359 [368/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:28:07.618 [369/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:28:07.618 [370/740] Linking static target lib/librte_lpm.a 00:28:07.618 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:28:07.618 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:28:07.618 [373/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:28:07.618 [374/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:28:07.618 [375/740] 
Generating lib/rte_power_def with a custom command 00:28:07.877 [376/740] Generating lib/rte_power_mingw with a custom command 00:28:07.877 [377/740] Generating lib/rte_rawdev_def with a custom command 00:28:07.877 [378/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:28:07.877 [379/740] Generating lib/rte_rawdev_mingw with a custom command 00:28:07.877 [380/740] Generating lib/rte_regexdev_def with a custom command 00:28:07.877 [381/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:28:07.877 [382/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:07.877 [383/740] Linking static target lib/librte_pcapng.a 00:28:07.877 [384/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:28:07.877 [385/740] Generating lib/rte_regexdev_mingw with a custom command 00:28:07.877 [386/740] Linking target lib/librte_eventdev.so.23.0 00:28:07.877 [387/740] Linking target lib/librte_lpm.so.23.0 00:28:08.136 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:28:08.136 [389/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:28:08.136 [390/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:28:08.136 [391/740] Generating lib/rte_dmadev_def with a custom command 00:28:08.136 [392/740] Generating lib/rte_dmadev_mingw with a custom command 00:28:08.136 [393/740] Generating lib/rte_rib_def with a custom command 00:28:08.136 [394/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:28:08.136 [395/740] Linking static target lib/librte_rawdev.a 00:28:08.136 [396/740] Generating lib/rte_rib_mingw with a custom command 00:28:08.136 [397/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:28:08.136 [398/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:28:08.136 [399/740] Generating lib/rte_reorder_def with a custom command 00:28:08.136 [400/740] Linking target lib/librte_pcapng.so.23.0 00:28:08.136 [401/740] Generating lib/rte_reorder_mingw with a custom command 00:28:08.136 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:28:08.136 [403/740] Linking static target lib/librte_power.a 00:28:08.395 [404/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:28:08.395 [405/740] Linking static target lib/librte_dmadev.a 00:28:08.395 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:28:08.395 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:28:08.395 [408/740] Linking static target lib/librte_regexdev.a 00:28:08.395 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:28:08.654 [410/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:28:08.654 [411/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:08.654 [412/740] Linking target lib/librte_rawdev.so.23.0 00:28:08.654 [413/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:28:08.654 [414/740] Generating lib/rte_sched_def with a custom command 00:28:08.654 [415/740] Generating lib/rte_sched_mingw with a custom command 00:28:08.654 [416/740] Generating lib/rte_security_def with a custom command 00:28:08.654 [417/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:28:08.654 [418/740] Linking static 
target lib/librte_member.a 00:28:08.654 [419/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:28:08.654 [420/740] Generating lib/rte_security_mingw with a custom command 00:28:08.654 [421/740] Linking static target lib/librte_reorder.a 00:28:08.654 [422/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:28:08.654 [423/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:08.913 [424/740] Linking target lib/librte_dmadev.so.23.0 00:28:08.913 [425/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:28:08.913 [426/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:28:08.913 [427/740] Generating lib/rte_stack_def with a custom command 00:28:08.913 [428/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:28:08.913 [429/740] Linking static target lib/librte_stack.a 00:28:08.913 [430/740] Generating lib/rte_stack_mingw with a custom command 00:28:08.913 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:28:08.913 [432/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:28:08.913 [433/740] Linking static target lib/librte_rib.a 00:28:08.913 [434/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:28:08.913 [435/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:08.913 [436/740] Linking target lib/librte_reorder.so.23.0 00:28:08.913 [437/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:28:08.913 [438/740] Linking target lib/librte_regexdev.so.23.0 00:28:08.913 [439/740] Linking target lib/librte_member.so.23.0 00:28:09.172 [440/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:28:09.172 [441/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:28:09.172 [442/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:28:09.172 [443/740] Linking target lib/librte_stack.so.23.0 00:28:09.172 [444/740] Linking target lib/librte_power.so.23.0 00:28:09.172 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:28:09.172 [446/740] Linking static target lib/librte_security.a 00:28:09.172 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:28:09.431 [448/740] Linking target lib/librte_rib.so.23.0 00:28:09.431 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:28:09.431 [450/740] Generating lib/rte_vhost_def with a custom command 00:28:09.431 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:28:09.431 [452/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:28:09.431 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:28:09.690 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:28:09.690 [455/740] Linking target lib/librte_security.so.23.0 00:28:09.690 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:28:09.690 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:28:09.948 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:28:09.948 [459/740] Linking static target lib/librte_sched.a 00:28:10.207 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:28:10.207 [461/740] Compiling C 
object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:28:10.207 [462/740] Generating lib/rte_ipsec_def with a custom command 00:28:10.207 [463/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:28:10.207 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:28:10.207 [465/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:28:10.207 [466/740] Linking target lib/librte_sched.so.23.0 00:28:10.466 [467/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:28:10.466 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:28:10.466 [469/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:28:10.466 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:28:10.726 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:28:10.726 [472/740] Generating lib/rte_fib_def with a custom command 00:28:10.726 [473/740] Generating lib/rte_fib_mingw with a custom command 00:28:10.726 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:28:10.986 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:28:10.986 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:28:11.245 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:28:11.245 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:28:11.245 [479/740] Linking static target lib/librte_ipsec.a 00:28:11.245 [480/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:28:11.245 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:28:11.245 [482/740] Linking static target lib/librte_fib.a 00:28:11.504 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:28:11.504 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:28:11.504 [485/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:28:11.764 [486/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:28:11.764 [487/740] Linking target lib/librte_ipsec.so.23.0 00:28:11.764 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:28:11.764 [489/740] Linking target lib/librte_fib.so.23.0 00:28:11.764 [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:28:11.764 [491/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:28:12.333 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:28:12.333 [493/740] Generating lib/rte_port_def with a custom command 00:28:12.333 [494/740] Generating lib/rte_port_mingw with a custom command 00:28:12.333 [495/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:28:12.333 [496/740] Generating lib/rte_pdump_def with a custom command 00:28:12.333 [497/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:28:12.333 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:28:12.333 [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:28:12.593 [500/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:28:12.593 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:28:12.593 [502/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:28:12.593 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:28:12.593 [504/740] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:28:12.593 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:28:13.170 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:28:13.170 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:28:13.170 [508/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:28:13.170 [509/740] Linking static target lib/librte_port.a 00:28:13.170 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:28:13.170 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:28:13.170 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:28:13.450 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:28:13.450 [514/740] Linking static target lib/librte_pdump.a 00:28:13.450 [515/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:28:13.708 [516/740] Linking target lib/librte_port.so.23.0 00:28:13.709 [517/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:28:13.709 [518/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:28:13.709 [519/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:28:13.709 [520/740] Linking target lib/librte_pdump.so.23.0 00:28:13.709 [521/740] Generating lib/rte_table_def with a custom command 00:28:13.709 [522/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:28:13.709 [523/740] Generating lib/rte_table_mingw with a custom command 00:28:13.968 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:28:13.968 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:28:13.968 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:28:13.968 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:28:13.968 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:28:14.228 [529/740] Generating lib/rte_pipeline_def with a custom command 00:28:14.228 [530/740] Generating lib/rte_pipeline_mingw with a custom command 00:28:14.228 [531/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:28:14.228 [532/740] Linking static target lib/librte_table.a 00:28:14.228 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:28:14.487 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:28:14.746 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:28:14.746 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:28:14.746 [537/740] Linking target lib/librte_table.so.23.0 00:28:14.746 [538/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:28:14.746 [539/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:28:15.005 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:28:15.005 [541/740] Generating lib/rte_graph_def with a custom command 00:28:15.005 [542/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:28:15.005 [543/740] Generating lib/rte_graph_mingw with a custom command 00:28:15.005 [544/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:28:15.265 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 
00:28:15.265 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:28:15.265 [547/740] Linking static target lib/librte_graph.a 00:28:15.265 [548/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:28:15.524 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:28:15.524 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:28:15.524 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:28:15.524 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:28:15.783 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:28:15.783 [554/740] Generating lib/rte_node_def with a custom command 00:28:15.783 [555/740] Generating lib/rte_node_mingw with a custom command 00:28:16.043 [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:28:16.043 [557/740] Linking target lib/librte_graph.so.23.0 00:28:16.043 [558/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:28:16.043 [559/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:28:16.043 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:28:16.043 [561/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:28:16.302 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:28:16.302 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:28:16.302 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:28:16.302 [565/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:28:16.302 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:28:16.302 [567/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:28:16.302 [568/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:28:16.302 [569/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:28:16.302 [570/740] Generating drivers/rte_bus_vdev_def with a custom command 00:28:16.302 [571/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:28:16.302 [572/740] Generating drivers/rte_mempool_ring_def with a custom command 00:28:16.302 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:28:16.302 [574/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:28:16.302 [575/740] Linking static target lib/librte_node.a 00:28:16.560 [576/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:28:16.560 [577/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:28:16.560 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:28:16.560 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:28:16.560 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:28:16.560 [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:28:16.560 [582/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:28:16.819 [583/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:28:16.819 [584/740] Linking target lib/librte_node.so.23.0 00:28:16.819 [585/740] Linking static target drivers/librte_bus_vdev.a 00:28:16.819 [586/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:28:16.819 [587/740] 
Generating drivers/rte_bus_pci.pmd.c with a custom command 00:28:16.819 [588/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:28:16.819 [589/740] Linking static target drivers/librte_bus_pci.a 00:28:16.819 [590/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:28:16.819 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:28:16.819 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:28:17.078 [593/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:28:17.078 [594/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:28:17.078 [595/740] Linking target drivers/librte_bus_pci.so.23.0 00:28:17.078 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:28:17.078 [597/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:28:17.078 [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:28:17.336 [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:28:17.337 [600/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:28:17.337 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:28:17.686 [602/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:28:17.686 [603/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:28:17.686 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:28:17.686 [605/740] Linking static target drivers/librte_mempool_ring.a 00:28:17.686 [606/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:28:17.686 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:28:17.686 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:28:17.945 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:28:18.203 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:28:18.462 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:28:18.722 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:28:18.980 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:28:18.980 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:28:18.980 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:28:18.980 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:28:19.546 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:28:19.546 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:28:19.546 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:28:19.546 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:28:19.546 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:28:20.112 [622/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:28:20.371 [623/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:28:20.630 [624/740] Compiling C object 
app/dpdk-pdump.p/pdump_main.c.o 00:28:20.630 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:28:20.630 [626/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:28:20.630 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:28:20.894 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:28:20.894 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:28:20.894 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:28:21.156 [631/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:28:21.156 [632/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:28:21.156 [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:28:21.416 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:28:21.675 [635/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:28:21.675 [636/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:28:21.675 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:28:21.675 [638/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:28:21.934 [639/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:28:21.934 [640/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:28:21.934 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:28:21.934 [642/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:28:21.934 [643/740] Linking static target drivers/librte_net_i40e.a 00:28:22.194 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:28:22.194 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:28:22.194 [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:28:22.194 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:28:22.454 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:28:22.454 [649/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:28:22.454 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:28:22.454 [651/740] Linking target drivers/librte_net_i40e.so.23.0 00:28:22.713 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:28:22.972 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:28:22.972 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:28:22.972 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:28:22.972 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:28:22.972 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:28:23.231 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:28:23.231 [659/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:28:23.231 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:28:23.231 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:28:23.490 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:28:23.490 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:28:23.750 [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:28:23.750 [665/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:28:23.750 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:28:23.750 [667/740] Linking static target lib/librte_vhost.a 00:28:24.009 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:28:24.268 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:28:24.268 [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:28:24.527 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:28:24.786 [672/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:28:24.786 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:28:24.786 [674/740] Linking target lib/librte_vhost.so.23.0 00:28:24.786 [675/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:28:24.786 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:28:24.786 [677/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:28:24.786 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:28:25.046 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:28:25.046 [680/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:28:25.305 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:28:25.305 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:28:25.305 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:28:25.305 [684/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:28:25.305 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:28:25.565 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:28:25.565 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:28:25.565 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:28:25.824 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:28:25.824 [690/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:28:26.083 [691/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:28:26.083 [692/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:28:26.083 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:28:26.083 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:28:26.341 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:28:26.341 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:28:26.599 
[697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:28:26.599 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:28:26.859 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:28:26.859 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:28:27.118 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:28:27.377 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:28:27.377 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:28:27.377 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:28:27.637 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:28:27.637 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:28:27.637 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:28:27.896 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:28:27.896 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:28:28.462 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:28:28.462 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:28:28.462 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:28:28.462 [713/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:28:28.462 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:28:28.462 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:28:28.462 [716/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:28:28.721 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:28:28.979 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:28:29.238 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:28:29.497 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:28:29.756 [721/740] Linking static target lib/librte_pipeline.a 00:28:30.015 [722/740] Linking target app/dpdk-pdump 00:28:30.015 [723/740] Linking target app/dpdk-test-acl 00:28:30.015 [724/740] Linking target app/dpdk-test-cmdline 00:28:30.015 [725/740] Linking target app/dpdk-test-compress-perf 00:28:30.015 [726/740] Linking target app/dpdk-dumpcap 00:28:30.015 [727/740] Linking target app/dpdk-test-bbdev 00:28:30.015 [728/740] Linking target app/dpdk-test-eventdev 00:28:30.015 [729/740] Linking target app/dpdk-proc-info 00:28:30.015 [730/740] Linking target app/dpdk-test-crypto-perf 00:28:30.583 [731/740] Linking target app/dpdk-test-fib 00:28:30.583 [732/740] Linking target app/dpdk-test-gpudev 00:28:30.583 [733/740] Linking target app/dpdk-test-flow-perf 00:28:30.583 [734/740] Linking target app/dpdk-test-pipeline 00:28:30.583 [735/740] Linking target app/dpdk-test-sad 00:28:30.583 [736/740] Linking target app/dpdk-test-security-perf 00:28:30.583 [737/740] Linking target app/dpdk-testpmd 00:28:30.583 [738/740] Linking target app/dpdk-test-regex 00:28:34.775 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:28:34.775 [740/740] Linking target lib/librte_pipeline.so.23.0 00:28:34.775 14:41:53 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:28:34.775 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:28:34.775 [0/1] 
Installing files. 00:28:34.775 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:28:34.775 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.776 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.777 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:34.778 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:28:34.778 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:34.779 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:34.779 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:34.779 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:28:34.780 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:28:34.780 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:28:34.780 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:28:34.780 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:28:34.780 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:34.780 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:28:35.043 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.043 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:35.044 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:35.044 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:35.044 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.044 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:28:35.044 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.044 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.045 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.046 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:28:35.047 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:28:35.047 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:28:35.047 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:28:35.047 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:28:35.047 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:28:35.047 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:28:35.047 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:28:35.047 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:28:35.047 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:28:35.047 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:28:35.047 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:28:35.047 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:28:35.047 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:28:35.047 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:28:35.047 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:28:35.047 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:28:35.047 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:28:35.047 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:28:35.047 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:28:35.047 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:28:35.047 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
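The "Installing symlink pointing to ..." entries in this part of the log record DPDK's versioned shared-library layout: for each real file librte_<name>.so.23.0 the install step creates a soname link librte_<name>.so.23 for the runtime loader and an unversioned librte_<name>.so link used at link time. A minimal sketch of the equivalent commands for one library, using the same install prefix shown in the log (illustrative only, not part of the recorded run):

    cd /home/vagrant/spdk_repo/dpdk/build/lib
    # soname link resolved by the dynamic loader at run time
    ln -sf librte_ethdev.so.23.0 librte_ethdev.so.23
    # unversioned development link used when linking consumers such as SPDK
    ln -sf librte_ethdev.so.23 librte_ethdev.so
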
00:28:35.047 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:28:35.047 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:28:35.047 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:28:35.047 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:28:35.047 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:28:35.047 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:28:35.047 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:28:35.047 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:28:35.047 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:28:35.047 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:28:35.047 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:28:35.047 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:28:35.047 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:28:35.047 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:28:35.047 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:28:35.047 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:28:35.047 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:28:35.047 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:28:35.047 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:28:35.047 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:28:35.047 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:28:35.047 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:28:35.047 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:28:35.047 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:28:35.047 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:28:35.047 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:28:35.047 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:28:35.047 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:28:35.047 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:28:35.047 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:28:35.047 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:28:35.047 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:28:35.047 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:28:35.047 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:28:35.047 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:28:35.047 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:28:35.047 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:28:35.047 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:28:35.047 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:28:35.047 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:28:35.047 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:28:35.047 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:28:35.047 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:28:35.047 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:28:35.047 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:28:35.047 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:28:35.047 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:28:35.047 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:28:35.047 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:28:35.047 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:28:35.047 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:28:35.047 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:28:35.047 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:28:35.047 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:28:35.047 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:28:35.048 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:28:35.048 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:28:35.048 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:28:35.048 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:28:35.048 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:28:35.048 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:28:35.048 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:28:35.048 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:28:35.048 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:28:35.048 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:28:35.048 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:28:35.048 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:28:35.048 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:28:35.048 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:28:35.048 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:28:35.048 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:28:35.048 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:28:35.048 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:28:35.048 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:28:35.048 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:28:35.048 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:28:35.048 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:28:35.048 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:28:35.048 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:28:35.048 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:28:35.048 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:28:35.048 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:28:35.048 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:28:35.048 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:28:35.048 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:28:35.048 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:28:35.048 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:28:35.048 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:28:35.048 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:28:35.048 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:28:35.048 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:28:35.048 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:28:35.048 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:28:35.048 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:28:35.048 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:28:35.048 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:28:35.048 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:28:35.048 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:28:35.048 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:28:35.048 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:28:35.048 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:28:35.048 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:28:35.048 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:28:35.048 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:28:35.048 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:28:35.048 14:41:54 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:28:35.048 14:41:54 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:28:35.048 14:41:54 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:28:35.048 14:41:54 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:35.048 00:28:35.048 real 0m46.993s 00:28:35.048 user 4m55.633s 00:28:35.048 sys 0m50.194s 00:28:35.048 14:41:54 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:28:35.048 ************************************ 00:28:35.048 END TEST build_native_dpdk 00:28:35.048 14:41:54 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:28:35.048 ************************************ 00:28:35.307 14:41:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:28:35.307 14:41:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:28:35.307 14:41:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:28:35.307 14:41:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:28:35.307 14:41:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:28:35.307 14:41:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:28:35.307 14:41:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:28:35.308 14:41:54 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:28:35.308 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:28:35.567 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:28:35.567 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:28:35.567 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:28:35.825 Using 'verbs' RDMA provider 00:28:52.101 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:29:04.305 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:29:04.305 go version go1.21.1 linux/amd64 00:29:04.305 Creating mk/config.mk...done. 00:29:04.305 Creating mk/cc.flags.mk...done. 00:29:04.305 Type 'make' to build. 00:29:04.305 14:42:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:29:04.305 14:42:22 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:29:04.305 14:42:22 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:29:04.305 14:42:22 -- common/autotest_common.sh@10 -- $ set +x 00:29:04.305 ************************************ 00:29:04.305 START TEST make 00:29:04.305 ************************************ 00:29:04.305 14:42:22 make -- common/autotest_common.sh@1121 -- $ make -j10 00:29:04.305 make[1]: Nothing to be done for 'all'. 00:29:30.877 CC lib/log/log_flags.o 00:29:30.877 CC lib/ut/ut.o 00:29:30.877 CC lib/log/log.o 00:29:30.877 CC lib/log/log_deprecated.o 00:29:30.877 CC lib/ut_mock/mock.o 00:29:30.877 LIB libspdk_ut.a 00:29:30.877 LIB libspdk_log.a 00:29:30.877 LIB libspdk_ut_mock.a 00:29:30.877 SO libspdk_ut.so.2.0 00:29:30.877 SO libspdk_log.so.7.0 00:29:30.877 SO libspdk_ut_mock.so.6.0 00:29:30.877 SYMLINK libspdk_ut.so 00:29:30.877 SYMLINK libspdk_log.so 00:29:30.877 SYMLINK libspdk_ut_mock.so 00:29:30.877 CXX lib/trace_parser/trace.o 00:29:30.877 CC lib/util/base64.o 00:29:30.877 CC lib/util/cpuset.o 00:29:30.877 CC lib/util/bit_array.o 00:29:30.877 CC lib/util/crc16.o 00:29:30.877 CC lib/util/crc32.o 00:29:30.877 CC lib/util/crc32c.o 00:29:30.877 CC lib/ioat/ioat.o 00:29:30.877 CC lib/dma/dma.o 00:29:30.877 CC lib/vfio_user/host/vfio_user_pci.o 00:29:30.877 CC lib/util/crc32_ieee.o 00:29:30.877 CC lib/util/crc64.o 00:29:30.877 CC lib/vfio_user/host/vfio_user.o 00:29:30.877 CC lib/util/dif.o 00:29:30.877 CC lib/util/fd.o 00:29:30.877 CC lib/util/file.o 00:29:30.877 LIB libspdk_dma.a 00:29:30.877 SO libspdk_dma.so.4.0 00:29:30.877 CC lib/util/hexlify.o 00:29:30.877 SYMLINK libspdk_dma.so 00:29:30.877 CC lib/util/iov.o 00:29:30.877 LIB libspdk_ioat.a 00:29:30.877 CC lib/util/math.o 00:29:30.877 SO libspdk_ioat.so.7.0 00:29:30.877 CC lib/util/pipe.o 00:29:30.877 CC lib/util/strerror_tls.o 00:29:30.877 CC lib/util/string.o 00:29:30.877 LIB libspdk_vfio_user.a 00:29:30.877 SYMLINK libspdk_ioat.so 00:29:30.877 CC lib/util/uuid.o 00:29:30.877 SO libspdk_vfio_user.so.5.0 00:29:30.877 CC lib/util/fd_group.o 00:29:30.877 SYMLINK libspdk_vfio_user.so 00:29:30.877 CC lib/util/xor.o 00:29:30.877 CC lib/util/zipf.o 00:29:30.877 LIB libspdk_util.a 00:29:30.877 SO libspdk_util.so.9.0 00:29:30.877 LIB libspdk_trace_parser.a 00:29:30.877 SO libspdk_trace_parser.so.5.0 00:29:30.877 SYMLINK libspdk_util.so 00:29:30.877 SYMLINK libspdk_trace_parser.so 00:29:30.877 CC lib/idxd/idxd.o 00:29:30.877 CC lib/idxd/idxd_kernel.o 
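The configure invocation recorded above builds SPDK against the freshly installed DPDK tree rather than the bundled submodule; the "Using .../dpdk/build/lib/pkgconfig for additional libs" line shows configure picking up the libdpdk.pc files installed earlier. A minimal sketch of that build step, assuming the same workspace layout as this log:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
        --with-avahi --with-golang --with-shared
    make -j10    # matches the `run_test make make -j10` step recorded above
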
00:29:30.877 CC lib/idxd/idxd_user.o 00:29:30.877 CC lib/json/json_parse.o 00:29:30.877 CC lib/json/json_util.o 00:29:30.877 CC lib/json/json_write.o 00:29:30.877 CC lib/vmd/vmd.o 00:29:30.877 CC lib/env_dpdk/env.o 00:29:30.877 CC lib/rdma/common.o 00:29:30.877 CC lib/conf/conf.o 00:29:30.877 CC lib/rdma/rdma_verbs.o 00:29:30.877 CC lib/env_dpdk/memory.o 00:29:30.877 CC lib/env_dpdk/pci.o 00:29:30.877 CC lib/env_dpdk/init.o 00:29:30.877 LIB libspdk_conf.a 00:29:30.877 SO libspdk_conf.so.6.0 00:29:30.877 LIB libspdk_json.a 00:29:30.877 CC lib/env_dpdk/threads.o 00:29:30.877 SO libspdk_json.so.6.0 00:29:30.877 SYMLINK libspdk_conf.so 00:29:30.877 CC lib/env_dpdk/pci_ioat.o 00:29:30.877 LIB libspdk_rdma.a 00:29:30.877 SYMLINK libspdk_json.so 00:29:30.877 CC lib/env_dpdk/pci_virtio.o 00:29:30.877 SO libspdk_rdma.so.6.0 00:29:30.877 CC lib/env_dpdk/pci_vmd.o 00:29:30.877 SYMLINK libspdk_rdma.so 00:29:30.877 CC lib/env_dpdk/pci_idxd.o 00:29:30.877 CC lib/env_dpdk/pci_event.o 00:29:30.877 LIB libspdk_idxd.a 00:29:30.877 CC lib/env_dpdk/sigbus_handler.o 00:29:30.877 SO libspdk_idxd.so.12.0 00:29:30.877 CC lib/vmd/led.o 00:29:30.877 CC lib/env_dpdk/pci_dpdk.o 00:29:30.877 CC lib/env_dpdk/pci_dpdk_2207.o 00:29:30.877 CC lib/env_dpdk/pci_dpdk_2211.o 00:29:30.877 SYMLINK libspdk_idxd.so 00:29:30.877 LIB libspdk_vmd.a 00:29:30.877 SO libspdk_vmd.so.6.0 00:29:30.877 CC lib/jsonrpc/jsonrpc_server.o 00:29:30.877 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:29:30.877 CC lib/jsonrpc/jsonrpc_client.o 00:29:30.877 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:29:30.877 SYMLINK libspdk_vmd.so 00:29:30.877 LIB libspdk_jsonrpc.a 00:29:30.877 SO libspdk_jsonrpc.so.6.0 00:29:30.877 SYMLINK libspdk_jsonrpc.so 00:29:30.877 LIB libspdk_env_dpdk.a 00:29:30.877 SO libspdk_env_dpdk.so.14.0 00:29:30.877 SYMLINK libspdk_env_dpdk.so 00:29:30.877 CC lib/rpc/rpc.o 00:29:31.137 LIB libspdk_rpc.a 00:29:31.137 SO libspdk_rpc.so.6.0 00:29:31.137 SYMLINK libspdk_rpc.so 00:29:31.704 CC lib/notify/notify.o 00:29:31.704 CC lib/notify/notify_rpc.o 00:29:31.704 CC lib/trace/trace.o 00:29:31.704 CC lib/trace/trace_flags.o 00:29:31.704 CC lib/trace/trace_rpc.o 00:29:31.704 CC lib/keyring/keyring.o 00:29:31.704 CC lib/keyring/keyring_rpc.o 00:29:31.704 LIB libspdk_notify.a 00:29:31.704 SO libspdk_notify.so.6.0 00:29:31.704 LIB libspdk_trace.a 00:29:31.704 LIB libspdk_keyring.a 00:29:31.704 SYMLINK libspdk_notify.so 00:29:31.704 SO libspdk_trace.so.10.0 00:29:31.962 SO libspdk_keyring.so.1.0 00:29:31.962 SYMLINK libspdk_trace.so 00:29:31.963 SYMLINK libspdk_keyring.so 00:29:32.222 CC lib/thread/thread.o 00:29:32.222 CC lib/thread/iobuf.o 00:29:32.222 CC lib/sock/sock.o 00:29:32.222 CC lib/sock/sock_rpc.o 00:29:32.482 LIB libspdk_sock.a 00:29:32.741 SO libspdk_sock.so.9.0 00:29:32.741 SYMLINK libspdk_sock.so 00:29:33.007 CC lib/nvme/nvme_ctrlr_cmd.o 00:29:33.007 CC lib/nvme/nvme_ctrlr.o 00:29:33.007 CC lib/nvme/nvme_fabric.o 00:29:33.007 CC lib/nvme/nvme_ns_cmd.o 00:29:33.007 CC lib/nvme/nvme_ns.o 00:29:33.007 CC lib/nvme/nvme_qpair.o 00:29:33.007 CC lib/nvme/nvme_pcie_common.o 00:29:33.007 CC lib/nvme/nvme_pcie.o 00:29:33.007 CC lib/nvme/nvme.o 00:29:33.572 LIB libspdk_thread.a 00:29:33.572 SO libspdk_thread.so.10.0 00:29:33.572 SYMLINK libspdk_thread.so 00:29:33.572 CC lib/nvme/nvme_quirks.o 00:29:33.831 CC lib/nvme/nvme_transport.o 00:29:33.831 CC lib/nvme/nvme_discovery.o 00:29:33.831 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:29:33.831 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:29:33.831 CC lib/nvme/nvme_tcp.o 00:29:33.831 CC lib/nvme/nvme_opal.o 00:29:33.831 
CC lib/nvme/nvme_io_msg.o 00:29:34.089 CC lib/nvme/nvme_poll_group.o 00:29:34.347 CC lib/nvme/nvme_zns.o 00:29:34.606 CC lib/accel/accel.o 00:29:34.606 CC lib/accel/accel_rpc.o 00:29:34.606 CC lib/blob/blobstore.o 00:29:34.606 CC lib/accel/accel_sw.o 00:29:34.606 CC lib/init/json_config.o 00:29:34.606 CC lib/virtio/virtio.o 00:29:34.606 CC lib/virtio/virtio_vhost_user.o 00:29:34.864 CC lib/virtio/virtio_vfio_user.o 00:29:34.864 CC lib/nvme/nvme_stubs.o 00:29:34.864 CC lib/nvme/nvme_auth.o 00:29:34.864 CC lib/init/subsystem.o 00:29:34.864 CC lib/virtio/virtio_pci.o 00:29:34.864 CC lib/blob/request.o 00:29:34.864 CC lib/init/subsystem_rpc.o 00:29:35.123 CC lib/init/rpc.o 00:29:35.123 CC lib/nvme/nvme_cuse.o 00:29:35.123 CC lib/blob/zeroes.o 00:29:35.123 LIB libspdk_virtio.a 00:29:35.123 LIB libspdk_init.a 00:29:35.123 SO libspdk_virtio.so.7.0 00:29:35.123 SO libspdk_init.so.5.0 00:29:35.123 SYMLINK libspdk_virtio.so 00:29:35.381 CC lib/blob/blob_bs_dev.o 00:29:35.381 CC lib/nvme/nvme_rdma.o 00:29:35.381 SYMLINK libspdk_init.so 00:29:35.381 LIB libspdk_accel.a 00:29:35.381 SO libspdk_accel.so.15.0 00:29:35.381 CC lib/event/app.o 00:29:35.381 CC lib/event/app_rpc.o 00:29:35.381 CC lib/event/reactor.o 00:29:35.381 SYMLINK libspdk_accel.so 00:29:35.381 CC lib/event/log_rpc.o 00:29:35.381 CC lib/event/scheduler_static.o 00:29:35.639 CC lib/bdev/bdev_rpc.o 00:29:35.639 CC lib/bdev/bdev.o 00:29:35.639 CC lib/bdev/bdev_zone.o 00:29:35.639 CC lib/bdev/part.o 00:29:35.639 CC lib/bdev/scsi_nvme.o 00:29:35.897 LIB libspdk_event.a 00:29:35.897 SO libspdk_event.so.13.0 00:29:36.156 SYMLINK libspdk_event.so 00:29:36.414 LIB libspdk_nvme.a 00:29:36.673 SO libspdk_nvme.so.13.0 00:29:36.931 SYMLINK libspdk_nvme.so 00:29:37.189 LIB libspdk_blob.a 00:29:37.448 SO libspdk_blob.so.11.0 00:29:37.448 SYMLINK libspdk_blob.so 00:29:37.707 CC lib/lvol/lvol.o 00:29:37.707 CC lib/blobfs/blobfs.o 00:29:37.707 CC lib/blobfs/tree.o 00:29:38.276 LIB libspdk_bdev.a 00:29:38.276 SO libspdk_bdev.so.15.0 00:29:38.276 SYMLINK libspdk_bdev.so 00:29:38.535 CC lib/nbd/nbd.o 00:29:38.535 CC lib/nbd/nbd_rpc.o 00:29:38.535 CC lib/scsi/dev.o 00:29:38.535 CC lib/scsi/lun.o 00:29:38.535 CC lib/ftl/ftl_core.o 00:29:38.535 CC lib/scsi/port.o 00:29:38.535 CC lib/ublk/ublk.o 00:29:38.535 CC lib/nvmf/ctrlr.o 00:29:38.535 LIB libspdk_blobfs.a 00:29:38.535 SO libspdk_blobfs.so.10.0 00:29:38.535 LIB libspdk_lvol.a 00:29:38.794 SO libspdk_lvol.so.10.0 00:29:38.794 SYMLINK libspdk_blobfs.so 00:29:38.794 CC lib/nvmf/ctrlr_discovery.o 00:29:38.794 CC lib/ublk/ublk_rpc.o 00:29:38.794 CC lib/scsi/scsi.o 00:29:38.794 SYMLINK libspdk_lvol.so 00:29:38.794 CC lib/scsi/scsi_bdev.o 00:29:38.794 CC lib/scsi/scsi_pr.o 00:29:38.794 CC lib/scsi/scsi_rpc.o 00:29:38.794 CC lib/ftl/ftl_init.o 00:29:38.794 CC lib/scsi/task.o 00:29:38.794 CC lib/ftl/ftl_layout.o 00:29:39.054 LIB libspdk_nbd.a 00:29:39.054 SO libspdk_nbd.so.7.0 00:29:39.054 SYMLINK libspdk_nbd.so 00:29:39.054 CC lib/ftl/ftl_debug.o 00:29:39.054 CC lib/ftl/ftl_io.o 00:29:39.054 CC lib/nvmf/ctrlr_bdev.o 00:29:39.054 CC lib/nvmf/subsystem.o 00:29:39.054 CC lib/ftl/ftl_sb.o 00:29:39.054 CC lib/nvmf/nvmf.o 00:29:39.314 LIB libspdk_scsi.a 00:29:39.314 LIB libspdk_ublk.a 00:29:39.314 CC lib/ftl/ftl_l2p.o 00:29:39.314 SO libspdk_ublk.so.3.0 00:29:39.314 SO libspdk_scsi.so.9.0 00:29:39.314 CC lib/ftl/ftl_l2p_flat.o 00:29:39.314 CC lib/ftl/ftl_nv_cache.o 00:29:39.314 SYMLINK libspdk_ublk.so 00:29:39.314 CC lib/nvmf/nvmf_rpc.o 00:29:39.314 SYMLINK libspdk_scsi.so 00:29:39.314 CC lib/ftl/ftl_band.o 00:29:39.314 
CC lib/ftl/ftl_band_ops.o 00:29:39.314 CC lib/nvmf/transport.o 00:29:39.573 CC lib/nvmf/tcp.o 00:29:39.573 CC lib/ftl/ftl_writer.o 00:29:39.832 CC lib/nvmf/stubs.o 00:29:39.832 CC lib/nvmf/mdns_server.o 00:29:39.832 CC lib/ftl/ftl_rq.o 00:29:40.090 CC lib/ftl/ftl_reloc.o 00:29:40.090 CC lib/ftl/ftl_l2p_cache.o 00:29:40.090 CC lib/ftl/ftl_p2l.o 00:29:40.090 CC lib/nvmf/rdma.o 00:29:40.090 CC lib/nvmf/auth.o 00:29:40.090 CC lib/ftl/mngt/ftl_mngt.o 00:29:40.368 CC lib/vhost/vhost.o 00:29:40.368 CC lib/iscsi/conn.o 00:29:40.368 CC lib/iscsi/init_grp.o 00:29:40.368 CC lib/iscsi/iscsi.o 00:29:40.627 CC lib/iscsi/md5.o 00:29:40.627 CC lib/iscsi/param.o 00:29:40.627 CC lib/iscsi/portal_grp.o 00:29:40.627 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:29:40.627 CC lib/iscsi/tgt_node.o 00:29:40.885 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:29:40.885 CC lib/iscsi/iscsi_subsystem.o 00:29:40.885 CC lib/vhost/vhost_rpc.o 00:29:40.885 CC lib/vhost/vhost_scsi.o 00:29:40.885 CC lib/iscsi/iscsi_rpc.o 00:29:40.885 CC lib/iscsi/task.o 00:29:40.885 CC lib/ftl/mngt/ftl_mngt_startup.o 00:29:41.144 CC lib/ftl/mngt/ftl_mngt_md.o 00:29:41.144 CC lib/ftl/mngt/ftl_mngt_misc.o 00:29:41.144 CC lib/vhost/vhost_blk.o 00:29:41.144 CC lib/vhost/rte_vhost_user.o 00:29:41.144 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:29:41.402 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:29:41.402 CC lib/ftl/mngt/ftl_mngt_band.o 00:29:41.402 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:29:41.402 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:29:41.402 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:29:41.402 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:29:41.660 LIB libspdk_iscsi.a 00:29:41.660 CC lib/ftl/utils/ftl_conf.o 00:29:41.660 CC lib/ftl/utils/ftl_md.o 00:29:41.660 CC lib/ftl/utils/ftl_mempool.o 00:29:41.660 SO libspdk_iscsi.so.8.0 00:29:41.660 CC lib/ftl/utils/ftl_bitmap.o 00:29:41.660 CC lib/ftl/utils/ftl_property.o 00:29:41.919 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:29:41.919 SYMLINK libspdk_iscsi.so 00:29:41.919 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:29:41.919 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:29:41.919 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:29:41.919 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:29:41.919 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:29:41.919 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:29:42.179 CC lib/ftl/upgrade/ftl_sb_v3.o 00:29:42.179 CC lib/ftl/upgrade/ftl_sb_v5.o 00:29:42.179 CC lib/ftl/nvc/ftl_nvc_dev.o 00:29:42.179 LIB libspdk_nvmf.a 00:29:42.179 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:29:42.179 CC lib/ftl/base/ftl_base_dev.o 00:29:42.179 CC lib/ftl/base/ftl_base_bdev.o 00:29:42.179 LIB libspdk_vhost.a 00:29:42.179 CC lib/ftl/ftl_trace.o 00:29:42.179 SO libspdk_nvmf.so.18.0 00:29:42.179 SO libspdk_vhost.so.8.0 00:29:42.179 SYMLINK libspdk_vhost.so 00:29:42.438 SYMLINK libspdk_nvmf.so 00:29:42.438 LIB libspdk_ftl.a 00:29:42.697 SO libspdk_ftl.so.9.0 00:29:42.956 SYMLINK libspdk_ftl.so 00:29:43.215 CC module/env_dpdk/env_dpdk_rpc.o 00:29:43.474 CC module/scheduler/gscheduler/gscheduler.o 00:29:43.474 CC module/keyring/file/keyring.o 00:29:43.474 CC module/keyring/linux/keyring.o 00:29:43.474 CC module/accel/error/accel_error.o 00:29:43.474 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:29:43.474 CC module/sock/posix/posix.o 00:29:43.474 CC module/accel/ioat/accel_ioat.o 00:29:43.474 CC module/scheduler/dynamic/scheduler_dynamic.o 00:29:43.474 CC module/blob/bdev/blob_bdev.o 00:29:43.474 LIB libspdk_env_dpdk_rpc.a 00:29:43.474 SO libspdk_env_dpdk_rpc.so.6.0 00:29:43.474 CC module/keyring/file/keyring_rpc.o 00:29:43.474 LIB libspdk_scheduler_gscheduler.a 
00:29:43.474 CC module/keyring/linux/keyring_rpc.o 00:29:43.474 LIB libspdk_scheduler_dpdk_governor.a 00:29:43.474 SO libspdk_scheduler_gscheduler.so.4.0 00:29:43.474 SYMLINK libspdk_env_dpdk_rpc.so 00:29:43.474 CC module/accel/error/accel_error_rpc.o 00:29:43.474 SO libspdk_scheduler_dpdk_governor.so.4.0 00:29:43.474 LIB libspdk_scheduler_dynamic.a 00:29:43.732 CC module/accel/ioat/accel_ioat_rpc.o 00:29:43.732 SYMLINK libspdk_scheduler_dpdk_governor.so 00:29:43.732 SYMLINK libspdk_scheduler_gscheduler.so 00:29:43.732 SO libspdk_scheduler_dynamic.so.4.0 00:29:43.732 LIB libspdk_keyring_linux.a 00:29:43.732 LIB libspdk_keyring_file.a 00:29:43.732 LIB libspdk_blob_bdev.a 00:29:43.732 SYMLINK libspdk_scheduler_dynamic.so 00:29:43.732 SO libspdk_keyring_linux.so.1.0 00:29:43.732 SO libspdk_keyring_file.so.1.0 00:29:43.732 SO libspdk_blob_bdev.so.11.0 00:29:43.732 LIB libspdk_accel_error.a 00:29:43.732 LIB libspdk_accel_ioat.a 00:29:43.732 SO libspdk_accel_error.so.2.0 00:29:43.732 SYMLINK libspdk_keyring_file.so 00:29:43.732 SYMLINK libspdk_keyring_linux.so 00:29:43.732 SYMLINK libspdk_blob_bdev.so 00:29:43.732 SO libspdk_accel_ioat.so.6.0 00:29:43.732 SYMLINK libspdk_accel_error.so 00:29:43.732 CC module/accel/iaa/accel_iaa.o 00:29:43.732 CC module/accel/iaa/accel_iaa_rpc.o 00:29:43.732 CC module/accel/dsa/accel_dsa.o 00:29:43.732 CC module/accel/dsa/accel_dsa_rpc.o 00:29:43.732 SYMLINK libspdk_accel_ioat.so 00:29:43.992 LIB libspdk_accel_iaa.a 00:29:43.992 CC module/blobfs/bdev/blobfs_bdev.o 00:29:43.992 CC module/bdev/error/vbdev_error.o 00:29:43.992 CC module/bdev/delay/vbdev_delay.o 00:29:43.992 CC module/bdev/lvol/vbdev_lvol.o 00:29:43.992 CC module/bdev/gpt/gpt.o 00:29:43.992 SO libspdk_accel_iaa.so.3.0 00:29:43.992 LIB libspdk_accel_dsa.a 00:29:43.992 SO libspdk_accel_dsa.so.5.0 00:29:43.992 LIB libspdk_sock_posix.a 00:29:43.992 CC module/bdev/malloc/bdev_malloc.o 00:29:43.992 SYMLINK libspdk_accel_iaa.so 00:29:43.992 CC module/bdev/null/bdev_null.o 00:29:44.251 CC module/bdev/malloc/bdev_malloc_rpc.o 00:29:44.251 SO libspdk_sock_posix.so.6.0 00:29:44.251 SYMLINK libspdk_accel_dsa.so 00:29:44.251 CC module/bdev/delay/vbdev_delay_rpc.o 00:29:44.251 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:29:44.251 CC module/bdev/gpt/vbdev_gpt.o 00:29:44.251 SYMLINK libspdk_sock_posix.so 00:29:44.251 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:29:44.251 CC module/bdev/error/vbdev_error_rpc.o 00:29:44.251 LIB libspdk_blobfs_bdev.a 00:29:44.251 CC module/bdev/null/bdev_null_rpc.o 00:29:44.251 LIB libspdk_bdev_delay.a 00:29:44.510 SO libspdk_blobfs_bdev.so.6.0 00:29:44.510 SO libspdk_bdev_delay.so.6.0 00:29:44.510 LIB libspdk_bdev_error.a 00:29:44.510 LIB libspdk_bdev_malloc.a 00:29:44.510 CC module/bdev/nvme/bdev_nvme.o 00:29:44.510 CC module/bdev/passthru/vbdev_passthru.o 00:29:44.510 SYMLINK libspdk_blobfs_bdev.so 00:29:44.510 CC module/bdev/nvme/bdev_nvme_rpc.o 00:29:44.510 SO libspdk_bdev_error.so.6.0 00:29:44.510 SO libspdk_bdev_malloc.so.6.0 00:29:44.510 SYMLINK libspdk_bdev_delay.so 00:29:44.510 CC module/bdev/nvme/nvme_rpc.o 00:29:44.510 CC module/bdev/nvme/bdev_mdns_client.o 00:29:44.510 LIB libspdk_bdev_gpt.a 00:29:44.510 SYMLINK libspdk_bdev_error.so 00:29:44.510 LIB libspdk_bdev_lvol.a 00:29:44.510 CC module/bdev/nvme/vbdev_opal.o 00:29:44.510 SO libspdk_bdev_gpt.so.6.0 00:29:44.510 SYMLINK libspdk_bdev_malloc.so 00:29:44.510 SO libspdk_bdev_lvol.so.6.0 00:29:44.510 LIB libspdk_bdev_null.a 00:29:44.510 SO libspdk_bdev_null.so.6.0 00:29:44.510 SYMLINK libspdk_bdev_gpt.so 00:29:44.770 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:29:44.770 SYMLINK libspdk_bdev_lvol.so 00:29:44.770 SYMLINK libspdk_bdev_null.so 00:29:44.770 CC module/bdev/raid/bdev_raid.o 00:29:44.770 CC module/bdev/nvme/vbdev_opal_rpc.o 00:29:44.770 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:29:44.770 CC module/bdev/split/vbdev_split.o 00:29:44.770 LIB libspdk_bdev_passthru.a 00:29:44.770 CC module/bdev/split/vbdev_split_rpc.o 00:29:44.770 SO libspdk_bdev_passthru.so.6.0 00:29:44.770 CC module/bdev/zone_block/vbdev_zone_block.o 00:29:45.051 CC module/bdev/raid/bdev_raid_rpc.o 00:29:45.051 SYMLINK libspdk_bdev_passthru.so 00:29:45.051 CC module/bdev/raid/bdev_raid_sb.o 00:29:45.051 CC module/bdev/aio/bdev_aio.o 00:29:45.051 CC module/bdev/aio/bdev_aio_rpc.o 00:29:45.051 CC module/bdev/raid/raid0.o 00:29:45.051 LIB libspdk_bdev_split.a 00:29:45.051 SO libspdk_bdev_split.so.6.0 00:29:45.051 SYMLINK libspdk_bdev_split.so 00:29:45.051 CC module/bdev/ftl/bdev_ftl.o 00:29:45.310 CC module/bdev/raid/raid1.o 00:29:45.310 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:29:45.310 CC module/bdev/raid/concat.o 00:29:45.310 LIB libspdk_bdev_aio.a 00:29:45.310 CC module/bdev/ftl/bdev_ftl_rpc.o 00:29:45.310 SO libspdk_bdev_aio.so.6.0 00:29:45.310 CC module/bdev/iscsi/bdev_iscsi.o 00:29:45.310 CC module/bdev/virtio/bdev_virtio_scsi.o 00:29:45.310 LIB libspdk_bdev_zone_block.a 00:29:45.310 SYMLINK libspdk_bdev_aio.so 00:29:45.310 SO libspdk_bdev_zone_block.so.6.0 00:29:45.310 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:29:45.569 CC module/bdev/virtio/bdev_virtio_blk.o 00:29:45.569 CC module/bdev/virtio/bdev_virtio_rpc.o 00:29:45.569 SYMLINK libspdk_bdev_zone_block.so 00:29:45.569 LIB libspdk_bdev_ftl.a 00:29:45.569 SO libspdk_bdev_ftl.so.6.0 00:29:45.569 SYMLINK libspdk_bdev_ftl.so 00:29:45.569 LIB libspdk_bdev_raid.a 00:29:45.569 LIB libspdk_bdev_iscsi.a 00:29:45.569 SO libspdk_bdev_raid.so.6.0 00:29:45.829 SO libspdk_bdev_iscsi.so.6.0 00:29:45.829 SYMLINK libspdk_bdev_raid.so 00:29:45.829 SYMLINK libspdk_bdev_iscsi.so 00:29:45.829 LIB libspdk_bdev_virtio.a 00:29:46.088 SO libspdk_bdev_virtio.so.6.0 00:29:46.088 SYMLINK libspdk_bdev_virtio.so 00:29:46.655 LIB libspdk_bdev_nvme.a 00:29:46.655 SO libspdk_bdev_nvme.so.7.0 00:29:46.655 SYMLINK libspdk_bdev_nvme.so 00:29:47.257 CC module/event/subsystems/scheduler/scheduler.o 00:29:47.257 CC module/event/subsystems/sock/sock.o 00:29:47.257 CC module/event/subsystems/vmd/vmd.o 00:29:47.257 CC module/event/subsystems/vmd/vmd_rpc.o 00:29:47.257 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:29:47.257 CC module/event/subsystems/keyring/keyring.o 00:29:47.257 CC module/event/subsystems/iobuf/iobuf.o 00:29:47.257 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:29:47.516 LIB libspdk_event_sock.a 00:29:47.516 LIB libspdk_event_vhost_blk.a 00:29:47.516 LIB libspdk_event_keyring.a 00:29:47.516 LIB libspdk_event_scheduler.a 00:29:47.516 LIB libspdk_event_vmd.a 00:29:47.516 SO libspdk_event_sock.so.5.0 00:29:47.516 SO libspdk_event_scheduler.so.4.0 00:29:47.516 SO libspdk_event_vhost_blk.so.3.0 00:29:47.516 LIB libspdk_event_iobuf.a 00:29:47.516 SO libspdk_event_keyring.so.1.0 00:29:47.516 SO libspdk_event_iobuf.so.3.0 00:29:47.516 SO libspdk_event_vmd.so.6.0 00:29:47.516 SYMLINK libspdk_event_sock.so 00:29:47.516 SYMLINK libspdk_event_scheduler.so 00:29:47.516 SYMLINK libspdk_event_vhost_blk.so 00:29:47.516 SYMLINK libspdk_event_keyring.so 00:29:47.516 SYMLINK libspdk_event_iobuf.so 00:29:47.516 SYMLINK libspdk_event_vmd.so 00:29:47.774 CC module/event/subsystems/accel/accel.o 
00:29:48.032 LIB libspdk_event_accel.a 00:29:48.032 SO libspdk_event_accel.so.6.0 00:29:48.032 SYMLINK libspdk_event_accel.so 00:29:48.599 CC module/event/subsystems/bdev/bdev.o 00:29:48.599 LIB libspdk_event_bdev.a 00:29:48.858 SO libspdk_event_bdev.so.6.0 00:29:48.858 SYMLINK libspdk_event_bdev.so 00:29:49.117 CC module/event/subsystems/scsi/scsi.o 00:29:49.117 CC module/event/subsystems/nbd/nbd.o 00:29:49.117 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:29:49.117 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:29:49.117 CC module/event/subsystems/ublk/ublk.o 00:29:49.117 LIB libspdk_event_nbd.a 00:29:49.117 SO libspdk_event_nbd.so.6.0 00:29:49.117 LIB libspdk_event_scsi.a 00:29:49.117 LIB libspdk_event_ublk.a 00:29:49.376 SO libspdk_event_scsi.so.6.0 00:29:49.376 SO libspdk_event_ublk.so.3.0 00:29:49.376 SYMLINK libspdk_event_nbd.so 00:29:49.376 SYMLINK libspdk_event_ublk.so 00:29:49.376 SYMLINK libspdk_event_scsi.so 00:29:49.376 LIB libspdk_event_nvmf.a 00:29:49.376 SO libspdk_event_nvmf.so.6.0 00:29:49.376 SYMLINK libspdk_event_nvmf.so 00:29:49.657 CC module/event/subsystems/iscsi/iscsi.o 00:29:49.657 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:29:49.916 LIB libspdk_event_vhost_scsi.a 00:29:49.916 LIB libspdk_event_iscsi.a 00:29:49.916 SO libspdk_event_vhost_scsi.so.3.0 00:29:49.916 SO libspdk_event_iscsi.so.6.0 00:29:49.916 SYMLINK libspdk_event_vhost_scsi.so 00:29:49.916 SYMLINK libspdk_event_iscsi.so 00:29:50.176 SO libspdk.so.6.0 00:29:50.176 SYMLINK libspdk.so 00:29:50.434 TEST_HEADER include/spdk/accel.h 00:29:50.434 TEST_HEADER include/spdk/accel_module.h 00:29:50.434 TEST_HEADER include/spdk/assert.h 00:29:50.434 CXX app/trace/trace.o 00:29:50.434 TEST_HEADER include/spdk/barrier.h 00:29:50.434 TEST_HEADER include/spdk/base64.h 00:29:50.434 TEST_HEADER include/spdk/bdev.h 00:29:50.434 TEST_HEADER include/spdk/bdev_module.h 00:29:50.434 TEST_HEADER include/spdk/bdev_zone.h 00:29:50.434 TEST_HEADER include/spdk/bit_array.h 00:29:50.434 TEST_HEADER include/spdk/bit_pool.h 00:29:50.434 TEST_HEADER include/spdk/blob_bdev.h 00:29:50.434 TEST_HEADER include/spdk/blobfs_bdev.h 00:29:50.434 TEST_HEADER include/spdk/blobfs.h 00:29:50.434 TEST_HEADER include/spdk/blob.h 00:29:50.434 TEST_HEADER include/spdk/conf.h 00:29:50.434 TEST_HEADER include/spdk/config.h 00:29:50.434 TEST_HEADER include/spdk/cpuset.h 00:29:50.434 TEST_HEADER include/spdk/crc16.h 00:29:50.434 TEST_HEADER include/spdk/crc32.h 00:29:50.434 TEST_HEADER include/spdk/crc64.h 00:29:50.434 TEST_HEADER include/spdk/dif.h 00:29:50.434 TEST_HEADER include/spdk/dma.h 00:29:50.434 TEST_HEADER include/spdk/endian.h 00:29:50.434 TEST_HEADER include/spdk/env_dpdk.h 00:29:50.434 TEST_HEADER include/spdk/env.h 00:29:50.434 TEST_HEADER include/spdk/event.h 00:29:50.434 TEST_HEADER include/spdk/fd_group.h 00:29:50.434 TEST_HEADER include/spdk/fd.h 00:29:50.434 TEST_HEADER include/spdk/file.h 00:29:50.434 TEST_HEADER include/spdk/ftl.h 00:29:50.434 TEST_HEADER include/spdk/gpt_spec.h 00:29:50.434 TEST_HEADER include/spdk/hexlify.h 00:29:50.434 TEST_HEADER include/spdk/histogram_data.h 00:29:50.434 CC test/event/event_perf/event_perf.o 00:29:50.434 TEST_HEADER include/spdk/idxd.h 00:29:50.434 TEST_HEADER include/spdk/idxd_spec.h 00:29:50.434 TEST_HEADER include/spdk/init.h 00:29:50.434 TEST_HEADER include/spdk/ioat.h 00:29:50.434 TEST_HEADER include/spdk/ioat_spec.h 00:29:50.434 TEST_HEADER include/spdk/iscsi_spec.h 00:29:50.434 TEST_HEADER include/spdk/json.h 00:29:50.434 CC examples/accel/perf/accel_perf.o 00:29:50.434 
TEST_HEADER include/spdk/jsonrpc.h 00:29:50.434 TEST_HEADER include/spdk/keyring.h 00:29:50.434 TEST_HEADER include/spdk/keyring_module.h 00:29:50.434 CC test/blobfs/mkfs/mkfs.o 00:29:50.434 TEST_HEADER include/spdk/likely.h 00:29:50.434 TEST_HEADER include/spdk/log.h 00:29:50.434 TEST_HEADER include/spdk/lvol.h 00:29:50.434 TEST_HEADER include/spdk/memory.h 00:29:50.434 TEST_HEADER include/spdk/mmio.h 00:29:50.434 TEST_HEADER include/spdk/nbd.h 00:29:50.434 TEST_HEADER include/spdk/notify.h 00:29:50.434 TEST_HEADER include/spdk/nvme.h 00:29:50.434 TEST_HEADER include/spdk/nvme_intel.h 00:29:50.434 CC test/accel/dif/dif.o 00:29:50.434 TEST_HEADER include/spdk/nvme_ocssd.h 00:29:50.693 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:29:50.693 TEST_HEADER include/spdk/nvme_spec.h 00:29:50.693 TEST_HEADER include/spdk/nvme_zns.h 00:29:50.693 CC test/app/bdev_svc/bdev_svc.o 00:29:50.693 TEST_HEADER include/spdk/nvmf_cmd.h 00:29:50.693 CC test/dma/test_dma/test_dma.o 00:29:50.693 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:29:50.693 TEST_HEADER include/spdk/nvmf.h 00:29:50.693 TEST_HEADER include/spdk/nvmf_spec.h 00:29:50.693 TEST_HEADER include/spdk/nvmf_transport.h 00:29:50.693 TEST_HEADER include/spdk/opal.h 00:29:50.693 TEST_HEADER include/spdk/opal_spec.h 00:29:50.693 TEST_HEADER include/spdk/pci_ids.h 00:29:50.693 TEST_HEADER include/spdk/pipe.h 00:29:50.693 TEST_HEADER include/spdk/queue.h 00:29:50.693 TEST_HEADER include/spdk/reduce.h 00:29:50.693 TEST_HEADER include/spdk/rpc.h 00:29:50.693 TEST_HEADER include/spdk/scheduler.h 00:29:50.693 TEST_HEADER include/spdk/scsi.h 00:29:50.693 TEST_HEADER include/spdk/scsi_spec.h 00:29:50.693 TEST_HEADER include/spdk/sock.h 00:29:50.693 TEST_HEADER include/spdk/stdinc.h 00:29:50.693 TEST_HEADER include/spdk/string.h 00:29:50.693 CC test/bdev/bdevio/bdevio.o 00:29:50.693 TEST_HEADER include/spdk/thread.h 00:29:50.693 TEST_HEADER include/spdk/trace.h 00:29:50.693 TEST_HEADER include/spdk/trace_parser.h 00:29:50.693 TEST_HEADER include/spdk/tree.h 00:29:50.693 TEST_HEADER include/spdk/ublk.h 00:29:50.693 CC test/env/mem_callbacks/mem_callbacks.o 00:29:50.693 TEST_HEADER include/spdk/util.h 00:29:50.693 TEST_HEADER include/spdk/uuid.h 00:29:50.693 TEST_HEADER include/spdk/version.h 00:29:50.693 TEST_HEADER include/spdk/vfio_user_pci.h 00:29:50.693 TEST_HEADER include/spdk/vfio_user_spec.h 00:29:50.693 TEST_HEADER include/spdk/vhost.h 00:29:50.693 TEST_HEADER include/spdk/vmd.h 00:29:50.693 TEST_HEADER include/spdk/xor.h 00:29:50.693 TEST_HEADER include/spdk/zipf.h 00:29:50.693 CXX test/cpp_headers/accel.o 00:29:50.693 LINK event_perf 00:29:50.693 LINK bdev_svc 00:29:50.952 LINK mkfs 00:29:50.952 LINK spdk_trace 00:29:50.952 LINK mem_callbacks 00:29:50.952 CXX test/cpp_headers/accel_module.o 00:29:50.952 LINK test_dma 00:29:50.952 CC test/event/reactor/reactor.o 00:29:50.952 LINK accel_perf 00:29:51.210 LINK bdevio 00:29:51.210 LINK dif 00:29:51.210 CC test/env/vtophys/vtophys.o 00:29:51.210 CC app/trace_record/trace_record.o 00:29:51.210 CXX test/cpp_headers/assert.o 00:29:51.210 CC test/app/histogram_perf/histogram_perf.o 00:29:51.210 LINK reactor 00:29:51.210 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:29:51.210 LINK vtophys 00:29:51.210 CC test/app/jsoncat/jsoncat.o 00:29:51.210 LINK histogram_perf 00:29:51.469 CXX test/cpp_headers/barrier.o 00:29:51.469 CC examples/bdev/hello_world/hello_bdev.o 00:29:51.469 LINK jsoncat 00:29:51.469 CC examples/bdev/bdevperf/bdevperf.o 00:29:51.469 CC test/event/reactor_perf/reactor_perf.o 00:29:51.469 LINK 
spdk_trace_record 00:29:51.469 CC examples/blob/hello_world/hello_blob.o 00:29:51.469 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:29:51.469 CXX test/cpp_headers/base64.o 00:29:51.728 CC test/event/app_repeat/app_repeat.o 00:29:51.728 LINK nvme_fuzz 00:29:51.728 LINK reactor_perf 00:29:51.728 LINK hello_bdev 00:29:51.728 LINK env_dpdk_post_init 00:29:51.728 LINK app_repeat 00:29:51.728 LINK hello_blob 00:29:51.728 CXX test/cpp_headers/bdev.o 00:29:52.002 CC app/nvmf_tgt/nvmf_main.o 00:29:52.002 CC test/lvol/esnap/esnap.o 00:29:52.002 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:29:52.002 CXX test/cpp_headers/bdev_module.o 00:29:52.002 CC test/env/memory/memory_ut.o 00:29:52.002 CC test/env/pci/pci_ut.o 00:29:52.002 CC test/nvme/aer/aer.o 00:29:52.002 LINK nvmf_tgt 00:29:52.002 CC test/event/scheduler/scheduler.o 00:29:52.281 CXX test/cpp_headers/bdev_zone.o 00:29:52.281 LINK bdevperf 00:29:52.281 CC examples/blob/cli/blobcli.o 00:29:52.281 LINK aer 00:29:52.281 LINK pci_ut 00:29:52.281 CXX test/cpp_headers/bit_array.o 00:29:52.539 CC app/iscsi_tgt/iscsi_tgt.o 00:29:52.539 LINK scheduler 00:29:52.539 CXX test/cpp_headers/bit_pool.o 00:29:52.539 CC test/nvme/reset/reset.o 00:29:52.539 CC examples/ioat/perf/perf.o 00:29:52.539 LINK blobcli 00:29:52.798 LINK iscsi_tgt 00:29:52.798 CXX test/cpp_headers/blob_bdev.o 00:29:52.798 LINK memory_ut 00:29:52.798 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:29:52.798 LINK ioat_perf 00:29:52.798 CC examples/nvme/hello_world/hello_world.o 00:29:52.798 LINK reset 00:29:52.798 CXX test/cpp_headers/blobfs_bdev.o 00:29:52.798 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:29:53.057 CXX test/cpp_headers/blobfs.o 00:29:53.057 CC test/rpc_client/rpc_client_test.o 00:29:53.057 CC app/spdk_tgt/spdk_tgt.o 00:29:53.057 CC examples/ioat/verify/verify.o 00:29:53.057 LINK hello_world 00:29:53.057 CC test/nvme/sgl/sgl.o 00:29:53.057 CXX test/cpp_headers/blob.o 00:29:53.316 CC examples/sock/hello_world/hello_sock.o 00:29:53.316 LINK rpc_client_test 00:29:53.316 LINK spdk_tgt 00:29:53.316 LINK verify 00:29:53.316 LINK vhost_fuzz 00:29:53.316 CXX test/cpp_headers/conf.o 00:29:53.316 CC examples/nvme/reconnect/reconnect.o 00:29:53.316 LINK sgl 00:29:53.316 LINK iscsi_fuzz 00:29:53.574 CXX test/cpp_headers/config.o 00:29:53.574 LINK hello_sock 00:29:53.574 CXX test/cpp_headers/cpuset.o 00:29:53.574 CC examples/nvme/nvme_manage/nvme_manage.o 00:29:53.574 CC app/spdk_lspci/spdk_lspci.o 00:29:53.574 CXX test/cpp_headers/crc16.o 00:29:53.574 CC test/thread/poller_perf/poller_perf.o 00:29:53.574 CC examples/vmd/lsvmd/lsvmd.o 00:29:53.574 CC test/nvme/e2edp/nvme_dp.o 00:29:53.832 LINK reconnect 00:29:53.832 LINK spdk_lspci 00:29:53.832 CC app/spdk_nvme_perf/perf.o 00:29:53.832 LINK poller_perf 00:29:53.832 LINK lsvmd 00:29:53.832 CC test/app/stub/stub.o 00:29:53.832 CXX test/cpp_headers/crc32.o 00:29:53.832 CXX test/cpp_headers/crc64.o 00:29:53.832 LINK nvme_dp 00:29:53.832 CXX test/cpp_headers/dif.o 00:29:54.091 LINK nvme_manage 00:29:54.091 LINK stub 00:29:54.091 CC examples/nvme/arbitration/arbitration.o 00:29:54.091 CC examples/nvme/hotplug/hotplug.o 00:29:54.091 CXX test/cpp_headers/dma.o 00:29:54.091 CC examples/vmd/led/led.o 00:29:54.091 CC examples/nvme/cmb_copy/cmb_copy.o 00:29:54.091 CC test/nvme/overhead/overhead.o 00:29:54.350 CXX test/cpp_headers/endian.o 00:29:54.350 CC test/nvme/err_injection/err_injection.o 00:29:54.350 LINK led 00:29:54.350 LINK cmb_copy 00:29:54.350 LINK hotplug 00:29:54.350 CXX test/cpp_headers/env_dpdk.o 00:29:54.350 CXX 
test/cpp_headers/env.o 00:29:54.350 LINK arbitration 00:29:54.627 CC examples/nvmf/nvmf/nvmf.o 00:29:54.627 LINK err_injection 00:29:54.627 CXX test/cpp_headers/event.o 00:29:54.627 CXX test/cpp_headers/fd_group.o 00:29:54.627 LINK spdk_nvme_perf 00:29:54.627 LINK overhead 00:29:54.627 CC examples/nvme/abort/abort.o 00:29:54.627 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:29:54.627 CC app/spdk_nvme_identify/identify.o 00:29:54.886 CC test/nvme/startup/startup.o 00:29:54.886 CXX test/cpp_headers/fd.o 00:29:54.886 LINK nvmf 00:29:54.886 CC app/spdk_nvme_discover/discovery_aer.o 00:29:54.886 CXX test/cpp_headers/file.o 00:29:54.886 LINK pmr_persistence 00:29:54.886 LINK startup 00:29:54.886 CXX test/cpp_headers/ftl.o 00:29:54.886 LINK spdk_nvme_discover 00:29:54.886 CC test/nvme/reserve/reserve.o 00:29:54.886 CXX test/cpp_headers/gpt_spec.o 00:29:55.145 LINK abort 00:29:55.145 CC examples/util/zipf/zipf.o 00:29:55.145 CXX test/cpp_headers/hexlify.o 00:29:55.145 LINK reserve 00:29:55.404 LINK zipf 00:29:55.404 CC examples/interrupt_tgt/interrupt_tgt.o 00:29:55.404 CC examples/thread/thread/thread_ex.o 00:29:55.404 CC examples/idxd/perf/perf.o 00:29:55.404 CXX test/cpp_headers/histogram_data.o 00:29:55.404 CC app/spdk_top/spdk_top.o 00:29:55.404 CC test/nvme/simple_copy/simple_copy.o 00:29:55.404 LINK interrupt_tgt 00:29:55.404 LINK spdk_nvme_identify 00:29:55.404 CXX test/cpp_headers/idxd.o 00:29:55.662 CXX test/cpp_headers/idxd_spec.o 00:29:55.662 LINK idxd_perf 00:29:55.662 LINK thread 00:29:55.662 CXX test/cpp_headers/init.o 00:29:55.662 LINK simple_copy 00:29:55.662 CC test/nvme/connect_stress/connect_stress.o 00:29:55.662 CC test/nvme/boot_partition/boot_partition.o 00:29:55.921 CC test/nvme/compliance/nvme_compliance.o 00:29:55.921 CXX test/cpp_headers/ioat.o 00:29:55.921 CXX test/cpp_headers/ioat_spec.o 00:29:55.921 LINK boot_partition 00:29:55.921 LINK connect_stress 00:29:55.921 CC test/nvme/fused_ordering/fused_ordering.o 00:29:55.921 CC test/nvme/doorbell_aers/doorbell_aers.o 00:29:56.180 LINK nvme_compliance 00:29:56.180 CXX test/cpp_headers/iscsi_spec.o 00:29:56.180 CC app/vhost/vhost.o 00:29:56.180 LINK spdk_top 00:29:56.180 LINK doorbell_aers 00:29:56.180 LINK fused_ordering 00:29:56.180 CC test/nvme/fdp/fdp.o 00:29:56.439 CXX test/cpp_headers/json.o 00:29:56.439 CC app/spdk_dd/spdk_dd.o 00:29:56.439 CXX test/cpp_headers/jsonrpc.o 00:29:56.439 CXX test/cpp_headers/keyring.o 00:29:56.439 CC test/nvme/cuse/cuse.o 00:29:56.439 CXX test/cpp_headers/keyring_module.o 00:29:56.439 LINK vhost 00:29:56.698 CXX test/cpp_headers/likely.o 00:29:56.698 CXX test/cpp_headers/log.o 00:29:56.698 CXX test/cpp_headers/lvol.o 00:29:56.698 LINK fdp 00:29:56.698 LINK esnap 00:29:56.698 CXX test/cpp_headers/memory.o 00:29:56.698 CXX test/cpp_headers/mmio.o 00:29:56.957 LINK spdk_dd 00:29:56.957 CXX test/cpp_headers/nbd.o 00:29:56.957 CC app/fio/nvme/fio_plugin.o 00:29:56.957 CXX test/cpp_headers/notify.o 00:29:56.957 CXX test/cpp_headers/nvme.o 00:29:56.957 CC app/fio/bdev/fio_plugin.o 00:29:56.957 CXX test/cpp_headers/nvme_intel.o 00:29:56.957 CXX test/cpp_headers/nvme_ocssd.o 00:29:56.957 CXX test/cpp_headers/nvme_ocssd_spec.o 00:29:57.217 CXX test/cpp_headers/nvme_spec.o 00:29:57.217 CXX test/cpp_headers/nvme_zns.o 00:29:57.217 CXX test/cpp_headers/nvmf_cmd.o 00:29:57.217 CXX test/cpp_headers/nvmf_fc_spec.o 00:29:57.217 CXX test/cpp_headers/nvmf.o 00:29:57.217 CXX test/cpp_headers/nvmf_spec.o 00:29:57.217 CXX test/cpp_headers/nvmf_transport.o 00:29:57.483 CXX test/cpp_headers/opal.o 
00:29:57.483 CXX test/cpp_headers/opal_spec.o 00:29:57.483 CXX test/cpp_headers/pci_ids.o 00:29:57.483 LINK spdk_bdev 00:29:57.483 CXX test/cpp_headers/pipe.o 00:29:57.483 CXX test/cpp_headers/queue.o 00:29:57.483 LINK spdk_nvme 00:29:57.483 CXX test/cpp_headers/reduce.o 00:29:57.483 CXX test/cpp_headers/rpc.o 00:29:57.483 CXX test/cpp_headers/scheduler.o 00:29:57.483 CXX test/cpp_headers/scsi.o 00:29:57.768 CXX test/cpp_headers/scsi_spec.o 00:29:57.768 CXX test/cpp_headers/sock.o 00:29:57.768 CXX test/cpp_headers/stdinc.o 00:29:57.768 CXX test/cpp_headers/string.o 00:29:57.768 CXX test/cpp_headers/thread.o 00:29:57.768 CXX test/cpp_headers/trace.o 00:29:57.768 CXX test/cpp_headers/trace_parser.o 00:29:57.768 CXX test/cpp_headers/tree.o 00:29:57.768 LINK cuse 00:29:57.768 CXX test/cpp_headers/ublk.o 00:29:57.768 CXX test/cpp_headers/util.o 00:29:57.768 CXX test/cpp_headers/uuid.o 00:29:57.768 CXX test/cpp_headers/version.o 00:29:58.027 CXX test/cpp_headers/vfio_user_pci.o 00:29:58.027 CXX test/cpp_headers/vfio_user_spec.o 00:29:58.027 CXX test/cpp_headers/vhost.o 00:29:58.027 CXX test/cpp_headers/vmd.o 00:29:58.027 CXX test/cpp_headers/xor.o 00:29:58.027 CXX test/cpp_headers/zipf.o 00:30:02.242 ************************************ 00:30:02.242 END TEST make 00:30:02.242 ************************************ 00:30:02.242 00:30:02.242 real 0m58.730s 00:30:02.242 user 5m14.333s 00:30:02.242 sys 1m10.328s 00:30:02.242 14:43:21 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:30:02.242 14:43:21 make -- common/autotest_common.sh@10 -- $ set +x 00:30:02.242 14:43:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:30:02.242 14:43:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:02.242 14:43:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:02.242 14:43:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.242 14:43:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:02.242 14:43:21 -- pm/common@44 -- $ pid=6106 00:30:02.242 14:43:21 -- pm/common@50 -- $ kill -TERM 6106 00:30:02.242 14:43:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.242 14:43:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:02.242 14:43:21 -- pm/common@44 -- $ pid=6108 00:30:02.242 14:43:21 -- pm/common@50 -- $ kill -TERM 6108 00:30:02.242 14:43:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:02.242 14:43:21 -- nvmf/common.sh@7 -- # uname -s 00:30:02.242 14:43:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.242 14:43:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.242 14:43:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.242 14:43:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.242 14:43:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.242 14:43:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.242 14:43:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.242 14:43:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.242 14:43:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.242 14:43:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.242 14:43:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:30:02.242 14:43:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:30:02.242 14:43:21 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.242 14:43:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.242 14:43:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:02.242 14:43:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.242 14:43:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:02.242 14:43:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.242 14:43:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.242 14:43:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.242 14:43:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.242 14:43:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.242 14:43:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.242 14:43:21 -- paths/export.sh@5 -- # export PATH 00:30:02.242 14:43:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.242 14:43:21 -- nvmf/common.sh@47 -- # : 0 00:30:02.242 14:43:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:02.242 14:43:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:02.242 14:43:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.242 14:43:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.242 14:43:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.242 14:43:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:02.242 14:43:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:02.242 14:43:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:02.242 14:43:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:30:02.242 14:43:21 -- spdk/autotest.sh@32 -- # uname -s 00:30:02.242 14:43:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:30:02.242 14:43:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:30:02.242 14:43:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:30:02.242 14:43:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:30:02.242 14:43:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:30:02.242 14:43:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:30:02.242 14:43:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:30:02.242 14:43:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:30:02.242 14:43:21 -- spdk/autotest.sh@48 -- # udevadm_pid=66747 
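The autotest prologue above saves the host's existing core_pattern and points core dumps at SPDK's core-collector.sh before the resource monitors are started. A rough sketch of the core-dump redirection implied by those xtrace lines, assuming the pattern is written to the kernel's core_pattern as the pipe syntax requires (the redirection target itself is not visible in the trace):

    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
    # pipe crashing processes through SPDK's collector; %P=pid, %s=signal, %t=time of dump
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
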
00:30:02.242 14:43:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:30:02.242 14:43:21 -- pm/common@17 -- # local monitor 00:30:02.242 14:43:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.242 14:43:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:30:02.242 14:43:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.242 14:43:21 -- pm/common@21 -- # date +%s 00:30:02.242 14:43:21 -- pm/common@25 -- # sleep 1 00:30:02.242 14:43:21 -- pm/common@21 -- # date +%s 00:30:02.242 14:43:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721659401 00:30:02.242 14:43:21 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721659401 00:30:02.502 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721659401_collect-cpu-load.pm.log 00:30:02.503 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721659401_collect-vmstat.pm.log 00:30:03.435 14:43:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:30:03.435 14:43:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:30:03.435 14:43:22 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:03.435 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:30:03.435 14:43:22 -- spdk/autotest.sh@59 -- # create_test_list 00:30:03.435 14:43:22 -- common/autotest_common.sh@744 -- # xtrace_disable 00:30:03.435 14:43:22 -- common/autotest_common.sh@10 -- # set +x 00:30:03.435 14:43:22 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:30:03.435 14:43:22 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:30:03.435 14:43:22 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:30:03.435 14:43:22 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:30:03.435 14:43:22 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:30:03.435 14:43:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:30:03.435 14:43:22 -- common/autotest_common.sh@1451 -- # uname 00:30:03.435 14:43:22 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:30:03.435 14:43:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:30:03.436 14:43:22 -- common/autotest_common.sh@1471 -- # uname 00:30:03.436 14:43:22 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:30:03.436 14:43:22 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:30:03.436 14:43:22 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:30:03.436 14:43:22 -- spdk/autotest.sh@72 -- # hash lcov 00:30:03.436 14:43:22 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:03.436 14:43:22 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:30:03.436 --rc lcov_branch_coverage=1 00:30:03.436 --rc lcov_function_coverage=1 00:30:03.436 --rc genhtml_branch_coverage=1 00:30:03.436 --rc genhtml_function_coverage=1 00:30:03.436 --rc genhtml_legend=1 00:30:03.436 --rc geninfo_all_blocks=1 00:30:03.436 ' 00:30:03.436 14:43:22 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:30:03.436 --rc lcov_branch_coverage=1 00:30:03.436 --rc lcov_function_coverage=1 00:30:03.436 --rc genhtml_branch_coverage=1 00:30:03.436 --rc genhtml_function_coverage=1 00:30:03.436 --rc genhtml_legend=1 00:30:03.436 --rc geninfo_all_blocks=1 00:30:03.436 ' 
00:30:03.436 14:43:22 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:30:03.436 --rc lcov_branch_coverage=1 00:30:03.436 --rc lcov_function_coverage=1 00:30:03.436 --rc genhtml_branch_coverage=1 00:30:03.436 --rc genhtml_function_coverage=1 00:30:03.436 --rc genhtml_legend=1 00:30:03.436 --rc geninfo_all_blocks=1 00:30:03.436 --no-external' 00:30:03.436 14:43:22 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:30:03.436 --rc lcov_branch_coverage=1 00:30:03.436 --rc lcov_function_coverage=1 00:30:03.436 --rc genhtml_branch_coverage=1 00:30:03.436 --rc genhtml_function_coverage=1 00:30:03.436 --rc genhtml_legend=1 00:30:03.436 --rc geninfo_all_blocks=1 00:30:03.436 --no-external' 00:30:03.436 14:43:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:30:03.436 lcov: LCOV version 1.14 00:30:03.436 14:43:22 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:30:18.341 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:30:18.341 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 
00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:30:30.559 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:30:30.559 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:30:30.559 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:30:30.560 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:30:30.560 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:30:30.560 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:30:30.561 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:30:30.561 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:30:33.849 14:43:53 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:30:33.849 14:43:53 -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:33.849 14:43:53 -- common/autotest_common.sh@10 -- # set +x 00:30:33.849 14:43:53 -- spdk/autotest.sh@91 -- # rm -f 00:30:33.849 14:43:53 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:34.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:34.482 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:30:34.482 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:30:34.482 14:43:54 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:30:34.482 14:43:54 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:30:34.482 14:43:54 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:30:34.482 14:43:54 -- common/autotest_common.sh@1666 -- # 
local nvme bdf 00:30:34.482 14:43:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:34.482 14:43:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:30:34.482 14:43:54 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:30:34.482 14:43:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:34.483 14:43:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:34.483 14:43:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:34.483 14:43:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:30:34.483 14:43:54 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:30:34.483 14:43:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:34.483 14:43:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:34.483 14:43:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:34.483 14:43:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:30:34.483 14:43:54 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:30:34.483 14:43:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:30:34.483 14:43:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:34.483 14:43:54 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:34.483 14:43:54 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:30:34.483 14:43:54 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:30:34.483 14:43:54 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:30:34.483 14:43:54 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:34.483 14:43:54 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:30:34.483 14:43:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:30:34.483 14:43:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:30:34.483 14:43:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:30:34.483 14:43:54 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:30:34.483 14:43:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:30:34.741 No valid GPT data, bailing 00:30:34.741 14:43:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:34.741 14:43:54 -- scripts/common.sh@391 -- # pt= 00:30:34.741 14:43:54 -- scripts/common.sh@392 -- # return 1 00:30:34.741 14:43:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:30:34.741 1+0 records in 00:30:34.741 1+0 records out 00:30:34.741 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00609042 s, 172 MB/s 00:30:34.742 14:43:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:30:34.742 14:43:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:30:34.742 14:43:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:30:34.742 14:43:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:30:34.742 14:43:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:30:34.742 No valid GPT data, bailing 00:30:34.742 14:43:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:34.742 14:43:54 -- scripts/common.sh@391 -- # pt= 00:30:34.742 14:43:54 -- scripts/common.sh@392 -- # return 1 00:30:34.742 14:43:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:30:34.742 1+0 records in 00:30:34.742 1+0 records out 00:30:34.742 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.00540655 s, 194 MB/s 00:30:34.742 14:43:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:30:34.742 14:43:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:30:34.742 14:43:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:30:34.742 14:43:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:30:34.742 14:43:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:30:34.742 No valid GPT data, bailing 00:30:34.742 14:43:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:30:34.742 14:43:54 -- scripts/common.sh@391 -- # pt= 00:30:34.742 14:43:54 -- scripts/common.sh@392 -- # return 1 00:30:34.742 14:43:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:30:34.742 1+0 records in 00:30:34.742 1+0 records out 00:30:34.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639341 s, 164 MB/s 00:30:34.742 14:43:54 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:30:34.742 14:43:54 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:30:34.742 14:43:54 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:30:34.742 14:43:54 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:30:34.742 14:43:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:30:35.001 No valid GPT data, bailing 00:30:35.001 14:43:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:30:35.001 14:43:54 -- scripts/common.sh@391 -- # pt= 00:30:35.001 14:43:54 -- scripts/common.sh@392 -- # return 1 00:30:35.001 14:43:54 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:30:35.001 1+0 records in 00:30:35.001 1+0 records out 00:30:35.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423364 s, 248 MB/s 00:30:35.001 14:43:54 -- spdk/autotest.sh@118 -- # sync 00:30:35.001 14:43:54 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:30:35.001 14:43:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:30:35.001 14:43:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:30:37.533 14:43:56 -- spdk/autotest.sh@124 -- # uname -s 00:30:37.533 14:43:56 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:30:37.533 14:43:56 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:30:37.533 14:43:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:37.533 14:43:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:37.533 14:43:56 -- common/autotest_common.sh@10 -- # set +x 00:30:37.533 ************************************ 00:30:37.533 START TEST setup.sh 00:30:37.533 ************************************ 00:30:37.533 14:43:56 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:30:37.533 * Looking for test storage... 
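The pre-cleanup pass above walks every /dev/nvme*n* namespace, skips zoned devices, probes each one with spdk-gpt.py and blkid, and zeroes the first MiB of any namespace that is not in use. A rough bash sketch of that loop; block_in_use here is a simplified stand-in for the helper the log invokes, which also consults scripts/spdk-gpt.py:

# Sketch only: mirrors the device pre-clean performed in the log above.
shopt -s extglob
block_in_use() { [[ -n $(blkid -s PTTYPE -o value "$1") ]]; }   # simplified stand-in
for dev in /dev/nvme*n!(*p*); do
    name=${dev##*/}
    # zoned namespaces are collected separately and never zeroed
    [[ -e /sys/block/$name/queue/zoned && $(<"/sys/block/$name/queue/zoned") != none ]] && continue
    # wipe the first MiB of any namespace with no partition table or signature
    block_in_use "$dev" || dd if=/dev/zero of="$dev" bs=1M count=1
done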
00:30:37.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:37.533 14:43:56 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:30:37.533 14:43:56 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:30:37.533 14:43:56 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:30:37.533 14:43:56 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:37.533 14:43:56 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:37.533 14:43:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:30:37.533 ************************************ 00:30:37.533 START TEST acl 00:30:37.533 ************************************ 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:30:37.533 * Looking for test storage... 00:30:37.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:37.533 14:43:56 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:30:37.533 14:43:56 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.533 14:43:56 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:30:37.533 14:43:56 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:30:37.533 14:43:56 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:30:37.533 
14:43:56 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:30:37.533 14:43:56 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:30:37.533 14:43:56 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:37.533 14:43:56 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:38.462 14:43:57 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:30:38.462 14:43:57 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:30:38.462 14:43:57 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:30:38.462 14:43:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:38.462 14:43:57 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:30:38.462 14:43:57 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:39.028 Hugepages 00:30:39.028 node hugesize free / total 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:39.028 00:30:39.028 Type BDF Vendor Device NUMA Driver Device Block devices 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:30:39.028 14:43:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:30:39.286 14:43:58 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:30:39.286 14:43:58 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:39.286 14:43:58 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:39.286 14:43:58 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:30:39.286 ************************************ 00:30:39.286 START TEST denied 
00:30:39.286 ************************************ 00:30:39.286 14:43:58 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:30:39.286 14:43:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:30:39.286 14:43:58 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:30:39.286 14:43:58 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:30:39.286 14:43:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:30:39.286 14:43:58 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:40.220 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:40.220 14:43:59 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:40.786 00:30:40.786 real 0m1.615s 00:30:40.786 user 0m0.608s 00:30:40.786 sys 0m0.992s 00:30:40.786 14:44:00 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:40.786 14:44:00 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:30:40.786 ************************************ 00:30:40.786 END TEST denied 00:30:40.786 ************************************ 00:30:40.786 14:44:00 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:30:40.786 14:44:00 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:40.786 14:44:00 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:40.786 14:44:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:30:40.786 ************************************ 00:30:40.786 START TEST allowed 00:30:40.786 ************************************ 00:30:40.786 14:44:00 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:30:40.786 14:44:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:30:40.786 14:44:00 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:30:40.786 14:44:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:30:40.786 14:44:00 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:30:40.786 14:44:00 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:41.751 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:11.0 ]] 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:41.751 14:44:01 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:42.685 00:30:42.685 real 0m1.647s 00:30:42.685 user 0m0.720s 00:30:42.685 sys 0m0.943s 00:30:42.685 14:44:02 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:42.685 14:44:02 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 ************************************ 00:30:42.685 END TEST allowed 00:30:42.685 ************************************ 00:30:42.685 00:30:42.685 real 0m5.204s 00:30:42.685 user 0m2.198s 00:30:42.685 sys 0m3.030s 00:30:42.685 14:44:02 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:42.685 14:44:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 ************************************ 00:30:42.685 END TEST acl 00:30:42.685 ************************************ 00:30:42.685 14:44:02 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:30:42.685 14:44:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:42.685 14:44:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:42.685 14:44:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:30:42.685 ************************************ 00:30:42.685 START TEST hugepages 00:30:42.685 ************************************ 00:30:42.685 14:44:02 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:30:42.685 * Looking for test storage... 
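The acl tests above drive scripts/setup.sh with PCI_BLOCKED and PCI_ALLOWED: a blocked controller is reported as "Skipping denied controller" and left on the nvme driver, while an allowed one is rebound from nvme to uio_pci_generic. A hedged illustration of that allow/block decision as a standalone bash function; this is a simplification of the observed behaviour, not setup.sh's actual code:

# Sketch only: approximate the PCI allow/block filtering the acl tests exercise.
# PCI_ALLOWED / PCI_BLOCKED are space-separated BDF lists, as in the log.
pci_can_use() {
    local bdf=$1
    # an explicit block list always wins
    [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
    # an empty allow list means "everything not blocked is allowed"
    [[ -z $PCI_ALLOWED ]] && return 0
    [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}

# usage, mirroring the denied-test expectation above
PCI_BLOCKED=" 0000:00:10.0"
pci_can_use 0000:00:10.0 || echo "Skipping denied controller at 0000:00:10.0"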
00:30:42.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4734132 kB' 'MemAvailable: 7381540 kB' 'Buffers: 2436 kB' 'Cached: 2849420 kB' 'SwapCached: 0 kB' 'Active: 476816 kB' 'Inactive: 2479148 kB' 'Active(anon): 114608 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479148 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 156 kB' 'Writeback: 0 kB' 'AnonPages: 105988 kB' 'Mapped: 48796 kB' 'Shmem: 10492 kB' 'KReclaimable: 85984 kB' 'Slab: 168004 kB' 'SReclaimable: 85984 kB' 'SUnreclaim: 82020 kB' 'KernelStack: 6540 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 336096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.685 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.686 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
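Note: the trace above, continuing just below until the Hugepagesize key matches, is setup/common.sh reading /proc/meminfo one "key: value" pair at a time and skipping every key other than the requested one. A minimal sketch of that scanning pattern, assuming the stock /proc/meminfo format and using a simplified, illustrative helper name (the real helper in setup/common.sh also handles per-NUMA-node meminfo files):

    #!/usr/bin/env bash
    # Sketch of the key-scanning loop traced above (simplified, hypothetical name).
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                        # value in kB for most keys
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_field Hugepagesize   # prints 2048 on this test VM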
00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:30:42.687 14:44:02 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:30:42.687 14:44:02 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:42.687 14:44:02 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:42.687 14:44:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:30:42.945 ************************************ 00:30:42.945 START TEST default_setup 00:30:42.945 ************************************ 00:30:42.945 14:44:02 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:30:42.945 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:30:42.945 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:30:42.945 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:30:42.945 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:30:42.946 14:44:02 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:43.516 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:43.516 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:43.782 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:43.782 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814176 kB' 'MemAvailable: 9461452 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493504 kB' 'Inactive: 2479164 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122372 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167696 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 82012 kB' 'KernelStack: 6464 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.782 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
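Note on where the snapshot just printed comes from: with no node argument (the trace shows "local node=" left empty), the /sys/devices/system/node/node/meminfo path does not exist, so the helper falls back to /proc/meminfo, captures it with mapfile, and strips any "Node <n> " prefix before the per-key scan. A rough reconstruction of that source selection, assuming behaviour matching the trace; names are simplified and illustrative, not the exact SPDK code:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-stripping pattern seen in the trace
    # Pick the meminfo source: the per-NUMA-node file if a node was given and exists,
    # otherwise the system-wide /proc/meminfo (the case taken in this run).
    read_meminfo_snapshot() {
        local node=$1 mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        printf '%s\n' "${mem[@]#Node +([0-9]) }"   # drop the "Node 0 " prefix of per-node files
    }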
00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
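Note: the figures in this run are mutually consistent. get_test_nr_hugepages was called with 2097152 for node 0, the detected default hugepage size is 2048 kB, and the resulting target is nr_hugepages=1024; the snapshot printed above likewise reports HugePages_Total: 1024, HugePages_Free: 1024 and Hugetlb: 2097152 kB (1024 x 2048 kB). A quick check of the same arithmetic (presumably size divided by the default page size; the expansion itself is not shown in the trace):

    echo $((2097152 / 2048))   # -> 1024, matching nr_hugepages and HugePages_Total above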
00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:30:43.783 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814176 kB' 'MemAvailable: 9461452 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493196 kB' 'Inactive: 2479164 kB' 'Active(anon): 130988 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122104 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167688 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 82004 kB' 'KernelStack: 6496 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 
'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.784 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 
14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.785 
14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:43.785 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814176 kB' 'MemAvailable: 9461456 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493240 kB' 'Inactive: 2479168 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122136 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167688 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 82004 kB' 'KernelStack: 6512 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 
14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.786 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.787 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:30:43.788 nr_hugepages=1024 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:30:43.788 resv_hugepages=0 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:43.788 surplus_hugepages=0 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:43.788 anon_hugepages=0 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:43.788 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814176 kB' 'MemAvailable: 9461456 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493204 kB' 'Inactive: 2479168 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122100 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167688 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 82004 kB' 'KernelStack: 6496 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.788 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:30:43.789 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:43.790 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6814176 kB' 'MemUsed: 5427800 kB' 'SwapCached: 0 kB' 'Active: 493504 kB' 'Inactive: 2479168 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 2851848 kB' 'Mapped: 48796 kB' 'AnonPages: 122396 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85684 kB' 'Slab: 167688 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 82004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:43.790 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:30:43.790 14:44:03 
[... 00:30:43.790-791 get_meminfo (setup/common.sh@31-32) keeps scanning /proc/meminfo: Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free are each read and skipped with continue because none of them matches HugePages_Surp ...]
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:30:43.791 node0=1024 expecting 1024
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:30:43.791 
00:30:43.791 real 0m1.060s
00:30:43.791 user 0m0.471s
00:30:43.791 sys 0m0.571s
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:43.791 14:44:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:30:43.791 ************************************
00:30:43.791 END TEST default_setup
00:30:43.791 ************************************
00:30:44.051 14:44:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:30:44.051 14:44:03 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:30:44.051 14:44:03 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
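The pass/fail logic that closes default_setup above is simple: each node's allocated hugepage count, plus whatever surplus get_meminfo reports, has to line up with the requested count (here node0=1024 expecting 1024). A minimal bash sketch of that comparison, assuming the stock sysfs hugepage layout; get_node_hugepages is an illustrative stand-in, not a helper from this suite:

#!/usr/bin/env bash
# Sketch of the per-node check that default_setup just passed.
# nodes_test holds the expected 2 MiB hugepage count per NUMA node
# (1024 for node0 in the log above); get_node_hugepages is a hypothetical
# helper that reads the node's actual count from sysfs.
declare -A nodes_test=([0]=1024)

get_node_hugepages() {
    cat "/sys/devices/system/node/node$1/hugepages/hugepages-2048kB/nr_hugepages"
}

for node in "${!nodes_test[@]}"; do
    actual=$(get_node_hugepages "$node")
    echo "node$node=$actual expecting ${nodes_test[$node]}"
    [[ $actual == "${nodes_test[$node]}" ]] || exit 1
done
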
00:30:44.051 14:44:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:30:44.051 ************************************
00:30:44.051 START TEST per_node_1G_alloc
00:30:44.051 ************************************
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:30:44.051 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:30:44.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:44.313 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:30:44.313 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- #
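The get_test_nr_hugepages trace above boils down to a unit conversion: the requested 1048576 kB (1 GiB) divided by the 2048 kB default hugepage size (the Hugepagesize value in the meminfo dumps further down) gives nr_hugepages=512, which is then assigned to the one requested node, hence NRHUGE=512 and HUGENODE=0. A sketch that reproduces those numbers; the real hugepages.sh arrives at them through its own helpers and also handles multi-node lists:

#!/usr/bin/env bash
# Reproduce the size-to-page-count arithmetic shown in the trace above.
size_kb=1048576                                   # requested total size (1 GiB)
default_hugepages_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM

(( size_kb >= default_hugepages_kb )) || { echo "request smaller than one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1048576 / 2048 = 512

declare -a nodes_test
for node_id in 0; do                              # HUGENODE=0 in the log
    nodes_test[node_id]=$nr_hugepages
done
echo "NRHUGE=$nr_hugepages HUGENODE=0 -> node0 gets ${nodes_test[0]} pages"
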
verify_nr_hugepages 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7873036 kB' 'MemAvailable: 10520320 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493520 kB' 'Inactive: 2479172 kB' 'Active(anon): 131312 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122716 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167596 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81912 kB' 'KernelStack: 6436 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.313 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:44.313 14:44:03 
[... 00:30:44.313-314 get_meminfo (setup/common.sh@31-32) scans the dump above field by field: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, the Active/Inactive counters, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are each read and skipped with continue because none of them matches AnonHugePages ...]
00:30:44.314 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var
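The long runs of continue above and below are get_meminfo scanning the chosen meminfo file key by key until it finds the requested field, then echoing its value (0 for AnonHugePages here, which is what lands in anon). A condensed sketch of that parse, following the mapfile/IFS/read pattern visible in the setup/common.sh trace; it is a simplification, not the verbatim function:

#!/usr/bin/env bash
shopt -s extglob   # the prefix strip below uses an extended glob, as in the original
# get_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo, or from the
# node's own meminfo file when NODE is given (the "Node N " prefix is stripped).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo AnonHugePages    # prints 0 on the VM in this log
get_meminfo HugePages_Surp   # prints 0
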
val 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7873036 kB' 'MemAvailable: 10520320 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493252 kB' 'Inactive: 2479172 kB' 'Active(anon): 131044 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122172 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167592 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81908 kB' 'KernelStack: 6480 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.315 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.315 14:44:03 
[... 00:30:44.315-316 get_meminfo (setup/common.sh@31-32) scans the dump above field by field: Buffers, Cached, SwapCached, the Active/Inactive counters, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, the vmalloc and percpu counters, HardwareCorrupted, the huge-page mapping counters, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each read and skipped with continue because none of them matches HugePages_Surp ...]
00:30:44.316 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:30:44.316 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:30:44.316 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:44.317 14:44:03
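At this point verify_nr_hugepages has anon=0 and surp=0 in hand and is about to read HugePages_Rsvd; all three values come straight from the counters visible in the meminfo dumps. To eyeball the same three counters by hand on a test box, something like the following is enough (not part of the suite):

awk '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd):/ {print $1, $2}' /proc/meminfo
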
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7873036 kB' 'MemAvailable: 10520320 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493512 kB' 'Inactive: 2479172 kB' 'Active(anon): 131304 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122432 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167592 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81908 kB' 'KernelStack: 6480 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.317 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
[... 00:30:44.317-318 get_meminfo (setup/common.sh@31-32) scans the dump above field by field: Buffers, Cached, SwapCached, the Active/Inactive counters, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim and KernelStack are each read and skipped with continue because none of them matches HugePages_Rsvd ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 
14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.318 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:30:44.319 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:30:44.579 nr_hugepages=512 00:30:44.579 resv_hugepages=0 00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:44.579 surplus_hugepages=0 00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:44.579 anon_hugepages=0 00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
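The scan above is the get_meminfo helper from setup/common.sh walking /proc/meminfo one key at a time until it hits the requested field. A minimal bash sketch of that pattern, simplified from what the xtrace shows (the function name and the plain while-read loop are illustrative; the real helper also supports per-node meminfo files and uses mapfile):

get_meminfo_sketch() {
    # e.g. get_meminfo_sketch HugePages_Rsvd  ->  prints "0" on this box
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching meminfo key is skipped
        echo "$val"                        # print just the numeric value
        return 0
    done < /proc/meminfo
    return 1
}

Calling it with HugePages_Rsvd would print 0 here, which is exactly the resv=0 the trace records.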
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:30:44.579 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7872812 kB' 'MemAvailable: 10520096 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493116 kB' 'Inactive: 2479172 kB' 'Active(anon): 130908 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122288 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167592 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81908 kB' 'KernelStack: 6480 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[xtrace 00:30:44.579-00:30:44.581, setup/common.sh@31-32: the same per-key scan runs again over the dump above, comparing each key against HugePages_Total and skipping it with continue until the match]
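As a quick consistency check on the dump above (an illustrative calculation, not part of the SPDK scripts): a HugePages_Total of 512 at a Hugepagesize of 2048 kB accounts for the Hugetlb total of 1048576 kB, i.e. the 1 GiB that this per_node_1G_alloc test reserves on its single node.

hugepages_total=512     # HugePages_Total from the dump above
hugepagesize_kb=2048    # Hugepagesize from the dump above
echo $(( hugepages_total * hugepagesize_kb ))   # 1048576 kB = 1 GiB, matching the Hugetlb line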
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:30:44.581 14:44:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7872812 kB' 'MemUsed: 4369164 kB' 'SwapCached: 0 kB' 'Active: 493112 kB' 'Inactive: 2479172 kB' 'Active(anon): 130904 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2851848 kB' 'Mapped: 48796 kB' 'AnonPages: 122288 kB' 'Shmem: 10468 kB' 'KernelStack: 6480 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85684 kB' 'Slab: 167592 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace 00:30:44.581-00:30:44.583, setup/common.sh@31-32: the per-key scan repeats over the node0 dump above, comparing each key against HugePages_Surp and skipping it with continue until the match; the job clock ticks from 14:44:03 to 14:44:04 during this pass]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:44.583 node0=512 expecting 512 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:30:44.583 00:30:44.583 real 0m0.581s 00:30:44.583 user 0m0.283s 00:30:44.583 sys 0m0.332s 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:44.583 14:44:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:30:44.583 ************************************ 00:30:44.583 END TEST per_node_1G_alloc 00:30:44.583 ************************************ 00:30:44.583 14:44:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:30:44.583 14:44:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:44.583 14:44:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:44.583 14:44:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:30:44.583 ************************************ 00:30:44.583 START TEST even_2G_alloc 00:30:44.583 ************************************ 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:44.583 14:44:04 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:30:44.583 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:45.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:45.153 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:45.153 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:45.153 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
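The even_2G_alloc trace above (setup/hugepages.sh@49-84 and @152-153) requests 2097152 kB of hugepages, converts that to nr_hugepages=1024 pages of the 2048 kB default size, assigns them across the detected NUMA nodes, and then re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before verify_nr_hugepages starts reading /proc/meminfo below. A minimal sketch of that size-to-pages arithmetic and per-node split follows, assuming a 2048 kB default hugepage size; the snippet is hypothetical and only illustrates the arithmetic visible in the trace, it is not the repository's own helper:

    #!/usr/bin/env bash
    # Hypothetical sketch (not the repository's helper) of the arithmetic traced
    # above: 2097152 kB requested / 2048 kB default hugepage size = 1024 pages,
    # spread over the NUMA nodes that sysfs reports.
    size_kb=2097152
    default_hugepage_kb=2048
    nr_hugepages=$(( size_kb / default_hugepage_kb ))        # 1024

    mapfile -t nodes < <(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null)
    no_nodes=${#nodes[@]}
    (( no_nodes == 0 )) && no_nodes=1                        # fall back to one node

    declare -a nodes_test
    for (( n = 0; n < no_nodes; n++ )); do
        nodes_test[n]=$(( nr_hugepages / no_nodes ))
    done

    for n in "${!nodes_test[@]}"; do
        echo "node${n}=${nodes_test[n]}"                     # e.g. node0=1024 on a single-node VM
    done

The node0=512 expecting 512 check that closes per_node_1G_alloc above is the same kind of per-node expectation, just stated for 512 pages on node 0.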
00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6830600 kB' 'MemAvailable: 9477884 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493672 kB' 'Inactive: 2479172 kB' 'Active(anon): 131464 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122352 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167552 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81868 kB' 'KernelStack: 6512 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.154 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
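The field-by-field walk above (and continuing below) is setup/common.sh's get_meminfo helper scanning /proc/meminfo for a single key: the file is captured with mapfile, each line is split with IFS=': ' into a name and a value, every non-matching name hits continue (which is why xtrace prints one test-and-continue pair per meminfo field), and the matching line's value is echoed before return 0. A condensed, self-contained reconstruction of that pattern is below; the function name follows the trace, but the body is simplified and should be read as an illustration rather than the script's literal code:

    #!/usr/bin/env bash
    # Condensed reconstruction of the scan traced above: walk /proc/meminfo line
    # by line and print the value of one field, skipping every other field.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each skipped field shows up as "continue" in xtrace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on the VM above (value is in kB)

The [[ -e /sys/devices/system/node/node/meminfo ]] test at common.sh@23 in the trace is the same scan pointed at a per-node meminfo file when a specific node is requested; here no node is given, so the global /proc/meminfo is read.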
00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6830600 kB' 'MemAvailable: 9477884 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493376 kB' 'Inactive: 2479172 kB' 'Active(anon): 131168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122316 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167552 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81868 kB' 'KernelStack: 6480 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.155 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
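The scan above is the second of three: verify_nr_hugepages (setup/hugepages.sh@89-100 in the trace) stores AnonHugePages into anon, HugePages_Surp into surp and, just below, HugePages_Rsvd into resv before checking the global HugePages_Total/HugePages_Free counters against the 1024 pages the test asked for. A hedged sketch of that bookkeeping follows; the final comparison is an assumption about what the verification amounts to, not the script's literal check:

    #!/usr/bin/env bash
    # Hedged sketch of the bookkeeping the trace is working towards; the
    # comparison at the end is an assumption, not the script's exact logic.
    get_meminfo() {
        # same condensed reconstruction as above, repeated so this sketch runs on its own
        local k=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$k" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    anon=$(get_meminfo AnonHugePages)    # trace: anon=0
    surp=$(get_meminfo HugePages_Surp)   # trace: surp=0
    resv=$(get_meminfo HugePages_Rsvd)   # read next in the log below

    total=$(get_meminfo HugePages_Total)
    expected=1024                        # NRHUGE for even_2G_alloc
    if (( total - surp == expected )); then
        echo "hugepage pool as expected: ${total} total, ${surp} surplus, ${resv} reserved, ${anon} kB anonymous THP"
    else
        echo "unexpected hugepage pool: ${total} total vs ${expected} requested" >&2
    fi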
00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.156 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 
14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:45.157 14:44:04 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6830376 kB' 'MemAvailable: 9477660 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493120 kB' 'Inactive: 2479172 kB' 'Active(anon): 130912 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122316 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167548 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81864 kB' 'KernelStack: 6480 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.157 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.158 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
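While the HugePages_Rsvd scan continues below, the meminfo dumps above are internally consistent: HugePages_Total: 1024 at Hugepagesize: 2048 kB accounts exactly for the Hugetlb: 2097152 kB (2 GiB) line, and the node0=512 expecting 512 result that closed per_node_1G_alloc is the same arithmetic at half the size (512 x 2048 kB = 1 GiB). A one-line awk cross-check of that relation on a live system, illustrative only and not part of the test suite:

    awk '/^HugePages_Total:/ {pages=$2}
         /^Hugepagesize:/    {kb=$2}
         END {printf "HugePages_Total=%d pages x %d kB = %d kB\n", pages, kb, pages*kb}' /proc/meminfo

On the VM above this should print 1024 pages x 2048 kB = 2097152 kB, matching the Hugetlb field in the dumps.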
[setup/common.sh@31-32: the loop reads each remaining /proc/meminfo key, Unevictable through HugePages_Free; none matches HugePages_Rsvd and every iteration ends in continue]
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:30:45.159 nr_hugepages=1024
resv_hugepages=0
14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:30:45.159 surplus_hugepages=0
14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:30:45.159 anon_hugepages=0
14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
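The entries above and below are setup/common.sh's get_meminfo walking a copy of /proc/meminfo one "key: value" pair at a time until the requested key matches, then echoing the bare value (0 for HugePages_Rsvd here, 1024 for HugePages_Total next). A rough standalone sketch of that parsing pattern, in bash; the function name meminfo_value is made up for illustration and this is not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch: print the value column of one /proc/meminfo key, mimicking the
    # IFS=': ' read -r var val _ loop visible in the trace. Assumes the key exists.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    meminfo_value HugePages_Rsvd    # prints 0 on this host, matching resv=0 above
    meminfo_value HugePages_Total   # prints 1024 during the even_2G_alloc pass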
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:30:45.159 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:30:45.160 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6833440 kB' 'MemAvailable: 9480724 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493460 kB' 'Inactive: 2479172 kB' 'Active(anon): 131252 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122412 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167544 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81860 kB' 'KernelStack: 6496 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32: the loop reads each key of the snapshot above, MemTotal through Unaccepted; none matches HugePages_Total and every iteration ends in continue]
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:30:45.161 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6835532 kB' 'MemUsed: 5406444 kB' 'SwapCached: 0 kB' 'Active: 493124 kB' 'Inactive: 2479172 kB' 'Active(anon): 130916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2851848 kB' 'Mapped: 48796 kB' 'AnonPages: 122316 kB' 'Shmem: 10468 kB' 'KernelStack: 6480 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85684 kB' 'Slab: 167544 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
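For the per-node pass, get_meminfo HugePages_Surp 0 above switches its source from /proc/meminfo to /sys/devices/system/node/node0/meminfo once that file exists, and the mem=("${mem[@]#Node +([0-9]) }") step strips the leading "Node 0 " prefix those lines carry. A small sketch of that source selection, again with an invented helper name (node_meminfo_value is not an SPDK function):

    # Sketch: read one key either system-wide or from a single NUMA node's meminfo.
    node_meminfo_value() {
        local get=$1 node=$2 mem_f=/proc/meminfo
        # Per-node counters live under /sys and prefix every line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed -E 's/^Node [0-9]+ //' "$mem_f" \
            | awk -v key="$get:" '$1 == key { print $2; exit }'
    }
    node_meminfo_value HugePages_Surp 0   # prints 0 here: node0 reports no surplus pages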
[setup/common.sh@31-32: the loop reads each key of the node0 snapshot above, MemTotal through HugePages_Free; none matches HugePages_Surp and every iteration ends in continue]
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:30:45.163 node0=1024 expecting 1024
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:30:45.163 
00:30:45.163 real 0m0.690s
00:30:45.163 user 0m0.324s
00:30:45.163 sys 0m0.406s
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:30:45.163 14:44:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:30:45.163 ************************************
00:30:45.163 END TEST even_2G_alloc
00:30:45.163 ************************************
00:30:45.420 14:44:04 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:30:45.421 14:44:04 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:30:45.421 14:44:04 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:30:45.421 14:44:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:30:45.421 ************************************
00:30:45.421 START TEST odd_alloc
00:30:45.421 ************************************
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
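get_test_nr_hugepages above turns a request given in kB into a whole number of hugepages. With the 2048 kB Hugepagesize reported in the snapshots, the 2098176 kB requested here (HUGEMEM=2049 in the next entries, i.e. 2049 MiB) comes out at 1024.5 pages, and the trace settles on nr_hugepages=1025, later reporting Hugetlb: 2099200 kB, which is 1025 x 2048 kB. That is consistent with rounding up to whole pages; the exact expression the script uses is not visible in this excerpt, so the following is only a back-of-the-envelope sketch of the arithmetic:

    # Sketch: ceiling division of a kB request into 2048 kB hugepages.
    size_kb=2098176      # 2049 MiB, matching the HUGEMEM=2049 seen below
    hugepage_kb=2048
    nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
    echo "nr_hugepages=$nr_hugepages"                      # 1025, as in the trace
    echo "hugetlb_kb=$(( nr_hugepages * hugepage_kb ))"    # 2099200 kB, the Hugetlb value reported later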
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:30:45.421 14:44:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:30:45.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:45.678 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:30:45.678 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
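verify_nr_hugepages above first checks the transparent-hugepage mode string (always [madvise] never) and, because it is not set to [never], goes on to read the AnonHugePages counter as a baseline; that get_meminfo AnonHugePages call continues in the trace below. A minimal sketch of such a check, assuming the standard sysfs path /sys/kernel/mm/transparent_hugepage/enabled, which this log never prints explicitly:

    # Sketch: record anonymous THP usage only when THP is not disabled, mirroring
    # the "[[ ... != *\[never\]* ]]" test in the trace above.
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon_kb=0
    if [[ $thp_mode != *"[never]"* ]]; then
        anon_kb=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    fi
    echo "anon_hugepages=${anon_kb} kB"   # 0 kB in this run's snapshots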
00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6840280 kB' 'MemAvailable: 9487564 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493448 kB' 'Inactive: 2479172 kB' 'Active(anon): 131240 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122344 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167592 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81908 kB' 'KernelStack: 6512 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 
14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.941 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
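What follows for the next several hundred trace records is a single helper scanning /proc/meminfo: get_meminfo AnonHugePages splits each "Key: value kB" record with IFS=': ', skips every key that is not the one requested (each skip is one of the "continue" lines in the xtrace), and echoes the value on the first match. A self-contained sketch of that pattern, simplified to read /proc/meminfo directly rather than through the mapfile'd array used in setup/common.sh:

    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do      # e.g. var=MemTotal val=12241976 _=kB
            [[ $var == "$get" ]] || continue      # non-matching keys produce the "continue" lines above
            echo "$val"                           # value only; the trailing kB unit lands in $_
            return 0
        done < /proc/meminfo
        return 1                                  # hypothetical fallback: key not present
    }

    get_meminfo_sketch AnonHugePages              # prints 0 for the snapshot shown above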
00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 
14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.942 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6840280 kB' 'MemAvailable: 9487564 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493172 kB' 'Inactive: 2479172 kB' 'Active(anon): 130964 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122324 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167592 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81908 kB' 'KernelStack: 6480 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.943 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 
14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
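With surp=0 recorded, the same scan starts over for HugePages_Rsvd. One detail worth noting in the setup lines of each pass: the helper falls back to the system-wide /proc/meminfo because no node id was passed (the [[ -e /sys/devices/system/node/node/meminfo ]] test has an empty node in the path), and the ${mem[@]#Node +([0-9]) } substitution strips the "Node <id> " prefix that per-node meminfo files put in front of every record. A small sketch of that source selection, with illustrative names:

    meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files format records as "Node 0 MemTotal: ... kB"; drop that prefix so
        # the same Key/value parsing works for both sources.
        sed -E 's/^Node [0-9]+ //' "$mem_f"
    }

    meminfo_source ""    # empty node id -> /proc/meminfo, as in the run traced here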
00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.944 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6840280 kB' 'MemAvailable: 9487564 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493316 kB' 'Inactive: 2479172 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122468 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167592 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81908 kB' 'KernelStack: 6480 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.945 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
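The HugePages_Rsvd pass below ends the same way (resv=0), after which verify_nr_hugepages compares the kernel's accounting against what the test configured; the nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 echoes and the (( 1025 == nr_hugepages + surp + resv )) comparison appear a little further down. A condensed, self-contained sketch of that consistency check, with awk standing in for the meminfo scans:

    expected=1025                                                  # pages odd_alloc asked for
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1025 in the snapshot above
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0
    (( expected == total + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( expected == total ))               || echo "surplus or reserved pages present" >&2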
00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.946 
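
The scan summarized above is the setup/common.sh get_meminfo helper walking a meminfo file one "key: value" line at a time until it reaches the requested field. A minimal standalone sketch of the same idea in bash (the meminfo_value name and interface are illustrative, not the project's actual helper):

    # Look up one field from /proc/meminfo, or from a per-node meminfo file when
    # a node index is given, and print its numeric value.
    meminfo_value() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local var val rest
        while IFS=': ' read -r var val rest; do
            # Per-node files prefix every line with "Node <n> "; shift past it.
            if [[ $var == Node ]]; then
                IFS=': ' read -r var val rest <<<"$rest"
            fi
            if [[ $var == "$key" ]]; then
                printf '%s\n' "$val"
                return 0
            fi
        done <"$file"
        return 1
    }

    meminfo_value HugePages_Rsvd      # prints 0 in the run above
    meminfo_value HugePages_Surp 0    # same lookup against node0's meminfo

The traced helper additionally caches the whole file into an array first, which is what the mapfile and "Node +([0-9])" stripping lines in the log correspond to.
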
14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:30:45.946 nr_hugepages=1025 00:30:45.946 resv_hugepages=0 00:30:45.946 surplus_hugepages=0 00:30:45.946 anon_hugepages=0 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.946 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.947 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:45.947 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:45.947 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.947 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:45.947 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.947 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.947 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6840280 kB' 'MemAvailable: 9487564 kB' 'Buffers: 2436 kB' 'Cached: 2849412 kB' 'SwapCached: 0 kB' 'Active: 493248 kB' 'Inactive: 2479172 kB' 'Active(anon): 131040 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122404 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167584 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81900 kB' 'KernelStack: 6496 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
[get_meminfo scan: MemTotal through FileHugePages from the snapshot above are read and skipped; none matches HugePages_Total]
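
The snapshot above feeds the bookkeeping check that follows in the trace: the kernel-reported HugePages_Total (1025 here) must equal the requested page count plus surplus plus reserved pages. A rough bash restatement of that check, reading the counters with awk instead of the traced helper (variable names are illustrative):

    requested=1025    # the odd page count this test asked for, per the trace above
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

    # Same shape as the hugepages.sh assertions: bail out if the global count
    # does not add up to requested + surplus + reserved.
    if (( total == requested + surp + resv )); then
        echo "hugepage accounting consistent: $total == $requested + $surp + $resv"
    else
        echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
        exit 1
    fi
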
setup/common.sh@31 -- # read -r var val _ 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6840280 kB' 'MemUsed: 5401696 kB' 'SwapCached: 0 kB' 'Active: 493240 kB' 'Inactive: 2479172 kB' 'Active(anon): 131032 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2851848 kB' 'Mapped: 48796 kB' 'AnonPages: 122400 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85684 kB' 'Slab: 167576 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.948 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.949 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[get_meminfo scan of node0 meminfo continues: Active(anon) through FileHugePages are read and skipped; none matches HugePages_Surp]
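
The node0 figure that follows in the trace (node0=1025 expecting 1025) comes from enumerating every /sys/devices/system/node/nodeN directory with the same extglob pattern seen earlier and reading that node's hugepage counter. A small sketch of that walk (illustrative, not the project's get_nodes):

    shopt -s extglob nullglob
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total: 1025".
        pages=$(awk '$1 == "Node" && $3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        echo "node$node=$pages"
    done
    # On this single-node runner the loop prints: node0=1025
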
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:45.950 node0=1025 expecting 1025 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:30:45.950 00:30:45.950 real 0m0.701s 00:30:45.950 user 0m0.321s 00:30:45.950 sys 0m0.401s 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:45.950 14:44:05 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:30:45.950 ************************************ 00:30:45.950 END TEST odd_alloc 00:30:45.950 ************************************ 00:30:46.210 14:44:05 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:30:46.210 14:44:05 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:46.210 14:44:05 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:46.210 14:44:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:30:46.210 ************************************ 00:30:46.210 START TEST custom_alloc 00:30:46.210 ************************************ 00:30:46.210 14:44:05 
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:30:46.210 
14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:30:46.210 14:44:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:46.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:46.470 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:46.470 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:46.734 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 
-- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7892828 kB' 'MemAvailable: 10540116 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 493476 kB' 'Inactive: 2479176 kB' 'Active(anon): 131268 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122396 kB' 'Mapped: 48920 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167560 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81876 kB' 'KernelStack: 6512 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.735 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[get_meminfo scan: SwapCached through Bounce are read and skipped while looking for AnonHugePages]
00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
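The xtrace around this point records how setup/common.sh resolves a single meminfo key: the snapshot taken with printf and mapfile is walked one entry at a time with IFS=': ' and read -r var val _, every non-matching key hits continue, and the requested key (AnonHugePages here; HugePages_Surp, HugePages_Rsvd and HugePages_Total further down) ends the loop with an echo of the value and return 0. A minimal standalone sketch of that lookup pattern follows; the name get_meminfo_value and its argument handling are illustrative assumptions, not the verbatim SPDK helper.

    #!/usr/bin/env bash
    # Minimal sketch of the lookup seen in the trace: scan a meminfo-style file
    # with IFS=': ' and print the value of the first key that matches.
    # get_meminfo_value is an illustrative name, not the SPDK setup/common.sh helper.
    get_meminfo_value() {
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
            echo "$val"                        # the unit ("kB"), when present, lands in "_"
            return 0
        done < "$mem_f"
        return 1
    }

Against this run's snapshot, get_meminfo_value HugePages_Total prints 512 and get_meminfo_value AnonHugePages prints 0, matching the values the trace echoes back.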
00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7892828 kB' 'MemAvailable: 10540116 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 493152 kB' 'Inactive: 2479176 kB' 'Active(anon): 130944 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122340 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167564 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81880 kB' 'KernelStack: 6480 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.736 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.737 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7892828 kB' 'MemAvailable: 10540116 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 493236 kB' 'Inactive: 2479176 kB' 'Active(anon): 131028 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122440 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167564 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81880 kB' 'KernelStack: 6496 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
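The entries just above (setup/common.sh@22 through @29) also show how the source file for this lookup is chosen: node= is left empty, so the [[ -e /sys/devices/system/node/node/meminfo ]] test fails and mem_f stays /proc/meminfo; with a NUMA node argument the per-node meminfo under /sys would be read instead, which is presumably why the mapfile step strips a leading "Node <N> " prefix via mem=("${mem[@]#Node +([0-9]) }"). A sketch of that selection under an illustrative wrapper name (the real helper keeps the result in its local mem_f variable instead of printing it):

    # Sketch of the meminfo source selection visible just above (setup/common.sh@22-@25).
    # pick_meminfo_file is an illustrative name, not part of setup/common.sh.
    pick_meminfo_file() {
        local node=${1:-}            # e.g. 0 for NUMA node 0, empty for the whole system
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }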
00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.738 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.738 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
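Each comparison in this trace prints the requested key with every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and so on). That is simply how bash xtrace renders a quoted right-hand side inside [[ ]]: quoting forces a literal match instead of a glob, and the trace echoes the internally escaped pattern. A short illustrative demo:

    set -x
    get=HugePages_Rsvd
    [[ MemTotal == "$get" ]] || echo "no match, keep scanning"       # xtrace shows: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    [[ HugePages_Rsvd == "$get" ]] && echo "match, report the value"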
00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.739 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.739 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:30:46.740 nr_hugepages=512 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:46.740 resv_hugepages=0 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:46.740 surplus_hugepages=0 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:46.740 anon_hugepages=0 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7892828 kB' 'MemAvailable: 10540116 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 493160 kB' 'Inactive: 2479176 kB' 'Active(anon): 130952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122340 kB' 'Mapped: 48796 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167564 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81880 kB' 'KernelStack: 6480 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.740 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
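The xtrace records above and below all come from the get_meminfo helper in setup/common.sh: it picks /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node id is given), strips any leading "Node <n> " prefix, splits each line on ': ', and echoes the value of the requested field. A minimal sketch of that lookup, reconstructed from this trace; the function name lookup_meminfo and the example calls are illustrative, not part of setup/common.sh:

    #!/usr/bin/env bash
    # Sketch of the field lookup the surrounding xtrace performs; assumes the
    # standard /proc/meminfo "Field:   value kB" layout shown in the dump above.
    shopt -s extglob

    lookup_meminfo() {                    # usage: lookup_meminfo <field> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        local -a mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0                            # field absent: report 0
    }

    lookup_meminfo HugePages_Total        # prints 512 on this runner's dump
    lookup_meminfo HugePages_Surp 0       # per-node form used later in the test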
00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.741 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7892828 kB' 'MemUsed: 4349148 kB' 'SwapCached: 0 kB' 'Active: 493168 kB' 'Inactive: 2479176 kB' 'Active(anon): 130960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2851852 kB' 'Mapped: 48796 kB' 'AnonPages: 122340 kB' 'Shmem: 10468 kB' 'KernelStack: 6480 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85684 kB' 'Slab: 167564 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.742 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:30:46.743 node0=512 expecting 512 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:30:46.743 00:30:46.743 real 0m0.744s 00:30:46.743 user 0m0.342s 00:30:46.743 sys 0m0.406s 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:46.743 14:44:06 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:30:46.743 ************************************ 00:30:46.743 END TEST custom_alloc 00:30:46.743 ************************************ 00:30:47.002 14:44:06 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:30:47.002 14:44:06 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:47.002 14:44:06 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:47.002 14:44:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:30:47.002 ************************************ 00:30:47.002 START TEST no_shrink_alloc 00:30:47.002 ************************************ 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:30:47.002 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:47.260 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:47.260 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:47.260 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:47.523 14:44:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6846112 kB' 'MemAvailable: 9493400 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 493468 kB' 'Inactive: 2479176 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122404 kB' 'Mapped: 48980 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167600 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81916 kB' 'KernelStack: 6468 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.523 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
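This second field scan belongs to verify_nr_hugepages in the no_shrink_alloc test: AnonHugePages is read at all only because /sys/kernel/mm/transparent_hugepage/enabled on this runner reports "always [madvise] never", i.e. THP is not pinned to [never]. A small sketch of that guard, assuming the standard sysfs path and reusing the lookup_meminfo helper sketched earlier (variable names are illustrative):

    # Sketch of the anon-THP accounting step: read AnonHugePages only when
    # transparent huge pages are not disabled outright.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon_kb=$(lookup_meminfo AnonHugePages)
    else
        anon_kb=0
    fi
    echo "anon_hugepages=$anon_kb"                         # 0 kB in this run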
00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:47.524 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6846112 kB' 'MemAvailable: 9493400 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 493648 kB' 'Inactive: 2479176 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122620 kB' 'Mapped: 48980 kB' 'Shmem: 10468 kB' 'KReclaimable: 85684 kB' 'Slab: 167600 kB' 'SReclaimable: 85684 kB' 'SUnreclaim: 81916 kB' 'KernelStack: 6452 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 353100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 
kB' 'DirectMap1G: 9437184 kB' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.525 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:47.526 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850016 kB' 'MemAvailable: 9497296 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 488772 kB' 'Inactive: 2479176 kB' 'Active(anon): 126564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117684 kB' 'Mapped: 48060 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 167480 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81808 kB' 'KernelStack: 6368 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.527 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.528 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:30:47.529 nr_hugepages=1024 00:30:47.529 resv_hugepages=0 00:30:47.529 surplus_hugepages=0 
00:30:47.529 anon_hugepages=0 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850328 kB' 'MemAvailable: 9497608 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 488704 kB' 'Inactive: 2479176 kB' 'Active(anon): 126496 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117596 kB' 'Mapped: 48060 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 167452 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81780 kB' 'KernelStack: 6368 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 
14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.529 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 
14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:30:47.530 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6850328 kB' 'MemUsed: 5391648 kB' 'SwapCached: 0 kB' 'Active: 488448 kB' 'Inactive: 2479176 kB' 'Active(anon): 126240 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 2851852 kB' 'Mapped: 48060 kB' 'AnonPages: 117596 kB' 'Shmem: 10468 kB' 'KernelStack: 6368 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 167444 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
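(The trace around this point is setup/common.sh's get_meminfo helper resolving HugePages_Surp for node 0: it switches mem_f to /sys/devices/system/node/node0/meminfo, strips the "Node 0" prefix from each line, splits every record on IFS=': ', and keeps reading until the requested key matches, then echoes the value and returns. The lines below are a minimal standalone sketch of that lookup pattern, not the script's own code; the name get_meminfo_sketch and the streaming read loop are assumptions made for illustration.)

# Hypothetical sketch of the lookup pattern traced here; the real helper in setup/common.sh
# loads the file into an array with mapfile instead of streaming it line by line.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node file, e.g. /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}          # per-node lines carry a "Node N " prefix; strip it
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                     # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done < "$mem_f"
    return 1
}

# On the host traced above, "get_meminfo_sketch HugePages_Surp 0" would print 0 and
# "get_meminfo_sketch HugePages_Total" would print 1024, assuming the same /proc and sysfs state.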
00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 
14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.531 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:47.532 
14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:47.532 node0=1024 expecting 1024 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:30:47.532 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:48.141 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:48.141 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:48.141 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:48.141 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:48.141 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6845080 kB' 'MemAvailable: 9492360 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 489292 kB' 'Inactive: 2479176 kB' 'Active(anon): 127084 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118212 kB' 'Mapped: 48124 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 167368 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81696 kB' 'KernelStack: 6400 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.141 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.141 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:48.142 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6845080 kB' 'MemAvailable: 9492360 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 488788 kB' 'Inactive: 2479176 kB' 'Active(anon): 126580 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117728 kB' 'Mapped: 48060 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 167368 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81696 kB' 'KernelStack: 6368 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.143 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 
14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.144 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6845080 kB' 'MemAvailable: 9492360 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 488668 kB' 'Inactive: 2479176 kB' 'Active(anon): 126460 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117612 kB' 'Mapped: 48060 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 167368 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81696 kB' 'KernelStack: 6368 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.145 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.146 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:48.147 nr_hugepages=1024 00:30:48.147 resv_hugepages=0 00:30:48.147 surplus_hugepages=0 00:30:48.147 anon_hugepages=0 00:30:48.147 14:44:07 
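The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it finds the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd), with setup/hugepages.sh then folding the results into the summary just printed (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0). A condensed sketch of the logic being traced, reconstructed from the traced commands rather than copied from the repository (the exact function bodies in SPDK's scripts may differ), looks roughly like this:

# Sketch only; names follow the trace (get_meminfo, mem_f, surp, resv, anon).
shopt -s extglob   # the "Node +([0-9]) " prefix strip below uses extglob

get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem

        # Prefer per-node stats when a node number is given and the sysfs
        # file exists; otherwise fall back to the global /proc/meminfo
        # (with node empty, the trace shows the -e test on
        # /sys/devices/system/node/node/meminfo failing).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan field by field -- this is the long run of [[ ... ]] / continue
        # entries in the trace -- and print the value of the requested key.
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue
                echo "$val"
                return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}

# hugepages.sh then gathers the counters and checks that the 1024 pages
# requested for this run are all present, with nothing surplus or reserved:
anon=$(get_meminfo AnonHugePages)     # 0 in this run
surp=$(get_meminfo HugePages_Surp)    # 0
resv=$(get_meminfo HugePages_Rsvd)    # 0
nr_hugepages=1024
echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( 1024 == nr_hugepages + surp + resv ))   # traced at hugepages.sh@107
(( 1024 == nr_hugepages ))                 # traced at hugepages.sh@109

After these checks pass, the script re-reads HugePages_Total (hugepages.sh@110), which is the scan the trace resumes with below.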
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6846396 kB' 'MemAvailable: 9493676 kB' 'Buffers: 2436 kB' 'Cached: 2849416 kB' 'SwapCached: 0 kB' 'Active: 488596 kB' 'Inactive: 2479176 kB' 'Active(anon): 126388 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117544 kB' 'Mapped: 48060 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 167368 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81696 kB' 'KernelStack: 6368 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 
14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.147 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.148 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:30:48.149 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6846396 kB' 'MemUsed: 5395580 kB' 'SwapCached: 0 kB' 'Active: 488584 kB' 'Inactive: 2479176 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 0 kB' 'Active(file): 362208 kB' 'Inactive(file): 2479176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 2851852 kB' 'Mapped: 48060 kB' 'AnonPages: 117576 kB' 'Shmem: 10468 kB' 'KernelStack: 6368 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 167368 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 81696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.149 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 
14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:30:48.150 14:44:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:30:48.150 node0=1024 expecting 1024 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:30:48.150 ************************************ 00:30:48.150 END TEST no_shrink_alloc 00:30:48.150 ************************************ 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:30:48.150 00:30:48.150 real 0m1.356s 00:30:48.150 user 0m0.623s 00:30:48.150 sys 0m0.772s 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:48.150 14:44:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:30:48.408 14:44:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:30:48.408 00:30:48.408 real 0m5.668s 00:30:48.408 user 0m2.571s 00:30:48.408 sys 0m3.233s 00:30:48.408 14:44:07 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:48.408 14:44:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:30:48.408 ************************************ 00:30:48.408 END TEST hugepages 00:30:48.408 ************************************ 00:30:48.409 14:44:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:30:48.409 14:44:07 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:48.409 14:44:07 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:48.409 14:44:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:30:48.409 ************************************ 00:30:48.409 START TEST driver 00:30:48.409 ************************************ 00:30:48.409 14:44:07 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:30:48.409 * Looking for test storage... 
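
The hugepages checks traced above boil down to one helper: read /proc/meminfo (or a per-node meminfo file under /sys/devices/system/node) with IFS=': ' and return the value for a single key such as HugePages_Total or HugePages_Surp. A minimal bash sketch of that lookup, not the SPDK helper itself, assuming only the standard meminfo layout:

get_meminfo_sketch() {
    # key is e.g. HugePages_Total; file defaults to the system-wide meminfo
    local key=$1 file=${2:-/proc/meminfo}
    local var val _
    # per-node files prefix every line with "Node <n> "; strip that so keys line up
    sed -E 's/^Node [0-9]+ +//' "$file" |
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                break
            fi
        done
}

# get_meminfo_sketch HugePages_Surp /sys/devices/system/node/node0/meminfo
# would print 0 on a node laid out like the one in the trace above.
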
00:30:48.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:48.409 14:44:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:30:48.409 14:44:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:48.409 14:44:07 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:49.344 14:44:08 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:30:49.344 14:44:08 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:49.344 14:44:08 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:49.344 14:44:08 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:30:49.344 ************************************ 00:30:49.344 START TEST guess_driver 00:30:49.344 ************************************ 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:30:49.344 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:30:49.344 Looking for driver=uio_pci_generic 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
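
The guess_driver trace above first tries the vfio path and, with zero IOMMU groups and unsafe no-IOMMU mode not enabled, falls back to uio_pci_generic by asking modprobe whether the module resolves to a .ko. A rough sketch of that decision, under the assumption that the vfio branch only needs either a populated /sys/kernel/iommu_groups or the unsafe no-IOMMU knob set to Y:

pick_driver_sketch() {
    local unsafe=""
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
        echo vfio-pci            # assumed name for the vfio-backed choice
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic     # matches "Looking for driver=uio_pci_generic" above
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}
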
00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:30:49.344 14:44:08 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:30:49.910 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:30:50.168 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:30:50.168 14:44:09 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:30:50.168 14:44:09 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:50.168 14:44:09 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:50.734 ************************************ 00:30:50.734 END TEST guess_driver 00:30:50.734 ************************************ 00:30:50.734 00:30:50.734 real 0m1.519s 00:30:50.734 user 0m0.554s 00:30:50.734 sys 0m0.996s 00:30:50.734 14:44:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:50.734 14:44:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:30:50.734 ************************************ 00:30:50.734 END TEST driver 00:30:50.734 ************************************ 00:30:50.734 00:30:50.734 real 0m2.351s 00:30:50.734 user 0m0.814s 00:30:50.734 sys 0m1.648s 00:30:50.734 14:44:10 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:50.734 14:44:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:30:50.734 14:44:10 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:30:50.734 14:44:10 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:50.734 14:44:10 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:50.734 14:44:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:30:50.734 ************************************ 00:30:50.734 START TEST devices 00:30:50.734 ************************************ 00:30:50.734 14:44:10 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:30:51.045 * Looking for test storage... 
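
After the driver is picked, the trace above re-reads the setup.sh config output and checks that every bound device actually points at the guessed driver: the fifth whitespace-separated field is treated as a marker, and when it is "->" the next field must equal uio_pci_generic. A sketch of that loop; the exact column layout of the setup.sh output is an assumption here:

verify_driver_sketch() {
    local want=$1 fail=0
    local _ marker bound
    # skip header-ish lines ("devices:" in the marker column); "->" lines carry
    # the currently bound driver right after the marker
    while read -r _ _ _ _ marker bound _; do
        [[ $marker == '->' ]] || continue
        [[ $bound == "$want" ]] || fail=1
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    return "$fail"
}

# verify_driver_sketch uio_pci_generic   # returns 0 when everything matches, as above
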
00:30:51.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:30:51.045 14:44:10 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:30:51.045 14:44:10 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:30:51.045 14:44:10 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:30:51.045 14:44:10 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:51.979 14:44:11 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:30:51.979 14:44:11 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:30:51.980 14:44:11 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:51.980 14:44:11 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:30:51.980 14:44:11 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:51.980 No valid GPT data, bailing 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:51.980 No valid GPT data, bailing 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:30:51.980 14:44:11 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:51.980 No valid GPT data, bailing 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:51.980 No valid GPT data, bailing 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:30:51.980 14:44:11 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:51.980 14:44:11 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:30:51.980 14:44:11 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:30:51.980 14:44:11 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:51.980 14:44:11 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:30:51.980 14:44:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:30:51.980 ************************************ 00:30:51.980 START TEST nvme_mount 00:30:51.980 ************************************ 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:30:51.980 14:44:11 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:30:53.353 Creating new GPT entries in memory. 00:30:53.353 GPT data structures destroyed! You may now partition the disk using fdisk or 00:30:53.353 other utilities. 00:30:53.354 14:44:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:30:53.354 14:44:12 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:53.354 14:44:12 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:30:53.354 14:44:12 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:30:53.354 14:44:12 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:30:54.286 Creating new GPT entries in memory. 00:30:54.286 The operation has completed successfully. 
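
Before the mount test starts, the devices trace above walks every /sys/block/nvme* entry, skips zoned namespaces, treats "No valid GPT data, bailing" as "this disk is free to claim", keeps only disks of at least 3221225472 bytes, and remembers which PCI address owns each one. A condensed sketch of that scan; replacing the spdk-gpt.py probe with plain blkid and resolving the PCI address through sysfs links are both assumptions:

declare -a blocks=()
declare -A blocks_to_pci=()
min_disk_size=3221225472                       # ~3 GiB, as in the trace
for path in /sys/block/nvme*; do               # the real run also filters out *c* multipath nodes
    dev=${path##*/}
    # zoned namespaces are excluded up front
    [[ -e $path/queue/zoned && $(< "$path/queue/zoned") != none ]] && continue
    # a disk that already carries a partition table counts as in use
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    # /sys/block/<dev>/size is in 512-byte sectors
    (( $(< "$path/size") * 512 >= min_disk_size )) || continue
    blocks+=("$dev")
    blocks_to_pci[$dev]=$(basename "$(readlink -f "$path/device/device")")
done
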
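
The partitioning step that just completed ("GPT data structures destroyed!" followed by "The operation has completed successfully.") amounts to wiping the disk's partition table and laying down one partition per requested part, starting at sector 2048. A condensed sketch of that pass, leaving out the flock and udev-sync wrappers the real run goes through:

partition_drive_sketch() {
    local disk=$1 part_no=${2:-1} size=${3:-1073741824}
    local part part_start=0 part_end=0
    (( size /= 4096 ))                    # mirror the trace: size is divided by 4096 before use
    sgdisk "/dev/$disk" --zap-all         # destroy any existing GPT/MBR structures
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"   # e.g. --new=1:2048:264191
    done
}
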
00:30:54.286 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:30:54.286 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 70972 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:30:54.287 14:44:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:54.547 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:54.547 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:30:54.547 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:30:54.547 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:54.547 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:54.547 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:54.547 14:44:14 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:54.547 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:30:54.806 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:30:54.806 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:30:55.065 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:55.065 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:55.065 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:55.065 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:30:55.065 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:30:55.066 14:44:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:55.324 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:55.324 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:30:55.324 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:30:55.324 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:55.324 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:55.324 14:44:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:55.583 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:55.583 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:55.583 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:55.583 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:30:55.842 14:44:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:56.102 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:30:56.362 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:30:56.363 14:44:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:30:56.363 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:30:56.363 00:30:56.363 real 0m4.285s 00:30:56.363 user 0m0.749s 00:30:56.363 sys 0m1.281s 00:30:56.363 14:44:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:56.363 14:44:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:30:56.363 ************************************ 00:30:56.363 END TEST nvme_mount 00:30:56.363 
************************************ 00:30:56.363 14:44:15 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:30:56.363 14:44:15 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:56.363 14:44:15 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:56.363 14:44:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:30:56.363 ************************************ 00:30:56.363 START TEST dm_mount 00:30:56.363 ************************************ 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:30:56.363 14:44:15 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:30:57.743 Creating new GPT entries in memory. 00:30:57.743 GPT data structures destroyed! You may now partition the disk using fdisk or 00:30:57.743 other utilities. 00:30:57.743 14:44:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:30:57.743 14:44:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:57.743 14:44:16 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:30:57.743 14:44:16 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:30:57.743 14:44:16 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:30:58.681 Creating new GPT entries in memory. 00:30:58.681 The operation has completed successfully. 
00:30:58.681 14:44:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:30:58.681 14:44:17 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:58.681 14:44:17 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:30:58.681 14:44:17 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:30:58.681 14:44:17 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:30:59.620 The operation has completed successfully. 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 71405 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:30:59.620 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:30:59.879 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:59.879 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:30:59.879 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:30:59.879 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:30:59.879 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:30:59.879 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.138 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:00.138 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.138 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:00.138 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.139 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:31:00.139 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:31:00.139 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:31:00.415 14:44:19 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:31:00.415 14:44:19 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:00.680 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:31:00.945 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:31:00.945 00:31:00.945 real 0m4.586s 00:31:00.945 user 0m0.523s 00:31:00.945 sys 0m1.027s 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:00.945 14:44:20 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:31:00.945 ************************************ 00:31:00.945 END TEST dm_mount 00:31:00.945 ************************************ 00:31:00.945 14:44:20 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:31:00.945 14:44:20 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:31:00.945 14:44:20 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:31:01.208 14:44:20 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:31:01.208 14:44:20 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:31:01.208 14:44:20 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:31:01.208 14:44:20 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:31:01.467 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:31:01.467 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:31:01.467 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:31:01.467 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:31:01.467 14:44:20 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:31:01.467 14:44:20 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:31:01.467 14:44:20 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:31:01.467 14:44:20 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:31:01.467 14:44:20 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:31:01.467 14:44:20 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:31:01.467 14:44:20 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:31:01.467 00:31:01.467 real 0m10.591s 00:31:01.467 user 0m1.941s 00:31:01.467 sys 0m3.088s 00:31:01.467 14:44:20 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:01.467 14:44:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:31:01.467 ************************************ 00:31:01.467 END TEST devices 00:31:01.467 ************************************ 00:31:01.467 00:31:01.467 real 0m24.172s 00:31:01.467 user 0m7.643s 00:31:01.467 sys 0m11.247s 00:31:01.467 14:44:20 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:01.467 14:44:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:31:01.467 ************************************ 00:31:01.467 END TEST setup.sh 00:31:01.467 ************************************ 00:31:01.467 14:44:20 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:31:02.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:02.405 Hugepages 00:31:02.405 node hugesize free / total 00:31:02.405 node0 1048576kB 0 / 0 00:31:02.405 node0 2048kB 2048 / 2048 00:31:02.405 00:31:02.405 Type BDF Vendor Device NUMA Driver Device Block devices 00:31:02.405 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:31:02.405 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:31:02.405 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:31:02.405 14:44:22 -- spdk/autotest.sh@130 -- # uname -s 00:31:02.405 14:44:22 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:31:02.405 14:44:22 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:31:02.405 14:44:22 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:03.341 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:03.341 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:03.341 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:03.620 14:44:23 -- common/autotest_common.sh@1528 -- # sleep 1 00:31:04.556 14:44:24 -- common/autotest_common.sh@1529 -- # bdfs=() 00:31:04.556 14:44:24 -- common/autotest_common.sh@1529 -- # local bdfs 00:31:04.556 14:44:24 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:31:04.556 14:44:24 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:31:04.556 14:44:24 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:04.556 14:44:24 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:04.556 14:44:24 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:04.556 14:44:24 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:04.556 14:44:24 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:31:04.556 14:44:24 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:31:04.556 14:44:24 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:04.556 14:44:24 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:05.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:05.123 Waiting for block devices as requested 00:31:05.123 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:05.123 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:05.381 14:44:24 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:31:05.381 14:44:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:31:05.381 14:44:24 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:31:05.381 14:44:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:31:05.381 14:44:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:31:05.381 14:44:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:31:05.381 14:44:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:31:05.381 14:44:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:31:05.381 14:44:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:31:05.381 14:44:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:31:05.381 14:44:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1553 -- # continue 00:31:05.381 14:44:24 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:31:05.381 14:44:24 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:31:05.381 14:44:24 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:31:05.381 14:44:24 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # grep oacs 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:31:05.381 14:44:24 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:31:05.381 14:44:24 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:31:05.381 14:44:24 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:31:05.381 14:44:24 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:31:05.381 14:44:24 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:31:05.381 14:44:24 -- common/autotest_common.sh@1553 -- # continue 00:31:05.381 14:44:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:31:05.381 14:44:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:05.381 14:44:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.381 14:44:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:31:05.381 14:44:24 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:05.381 14:44:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.381 14:44:24 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:06.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:06.319 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:06.319 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:06.319 14:44:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:31:06.319 14:44:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.319 14:44:25 -- common/autotest_common.sh@10 -- # set +x 00:31:06.319 14:44:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:31:06.319 14:44:25 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:31:06.319 14:44:25 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:31:06.319 14:44:25 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:31:06.319 14:44:25 -- common/autotest_common.sh@1573 -- # local bdfs 00:31:06.319 14:44:25 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:31:06.319 14:44:25 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:06.319 14:44:25 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:06.319 14:44:25 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:06.319 14:44:25 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:06.319 14:44:25 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:31:06.576 14:44:26 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:31:06.576 14:44:26 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:06.576 14:44:26 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:31:06.576 14:44:26 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:31:06.576 14:44:26 -- common/autotest_common.sh@1576 -- # device=0x0010 00:31:06.576 14:44:26 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:31:06.577 14:44:26 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:31:06.577 14:44:26 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:31:06.577 14:44:26 -- common/autotest_common.sh@1576 -- # device=0x0010 00:31:06.577 14:44:26 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:31:06.577 14:44:26 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:31:06.577 14:44:26 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:31:06.577 14:44:26 -- common/autotest_common.sh@1589 -- # return 0 00:31:06.577 14:44:26 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:31:06.577 14:44:26 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:31:06.577 14:44:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:31:06.577 14:44:26 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:31:06.577 14:44:26 -- spdk/autotest.sh@162 -- # timing_enter lib 00:31:06.577 14:44:26 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:06.577 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:06.577 14:44:26 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:31:06.577 14:44:26 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:31:06.577 14:44:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:06.577 14:44:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:06.577 14:44:26 -- common/autotest_common.sh@10 -- # set +x 00:31:06.577 ************************************ 00:31:06.577 START TEST env 00:31:06.577 ************************************ 00:31:06.577 14:44:26 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:31:06.577 * Looking for test storage... 
00:31:06.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:31:06.577 14:44:26 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:31:06.577 14:44:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:06.577 14:44:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:06.577 14:44:26 env -- common/autotest_common.sh@10 -- # set +x 00:31:06.577 ************************************ 00:31:06.577 START TEST env_memory 00:31:06.577 ************************************ 00:31:06.577 14:44:26 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:31:06.835 00:31:06.835 00:31:06.835 CUnit - A unit testing framework for C - Version 2.1-3 00:31:06.835 http://cunit.sourceforge.net/ 00:31:06.835 00:31:06.835 00:31:06.835 Suite: memory 00:31:06.835 Test: alloc and free memory map ...[2024-07-22 14:44:26.240664] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:31:06.835 passed 00:31:06.835 Test: mem map translation ...[2024-07-22 14:44:26.262714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:31:06.835 [2024-07-22 14:44:26.262750] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:31:06.835 [2024-07-22 14:44:26.262786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:31:06.835 [2024-07-22 14:44:26.262792] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:31:06.835 passed 00:31:06.835 Test: mem map registration ...[2024-07-22 14:44:26.306615] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:31:06.835 [2024-07-22 14:44:26.306688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:31:06.835 passed 00:31:06.835 Test: mem map adjacent registrations ...passed 00:31:06.835 00:31:06.835 Run Summary: Type Total Ran Passed Failed Inactive 00:31:06.835 suites 1 1 n/a 0 0 00:31:06.835 tests 4 4 4 0 0 00:31:06.835 asserts 152 152 152 0 n/a 00:31:06.835 00:31:06.835 Elapsed time = 0.160 seconds 00:31:06.835 00:31:06.835 real 0m0.188s 00:31:06.835 user 0m0.162s 00:31:06.835 sys 0m0.021s 00:31:06.835 14:44:26 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:06.835 14:44:26 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:31:06.835 ************************************ 00:31:06.835 END TEST env_memory 00:31:06.835 ************************************ 00:31:06.835 14:44:26 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:31:06.835 14:44:26 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:06.835 14:44:26 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:06.835 14:44:26 env -- common/autotest_common.sh@10 -- # set +x 00:31:06.835 ************************************ 00:31:06.835 START TEST env_vtophys 00:31:06.835 ************************************ 00:31:06.835 14:44:26 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:31:06.835 EAL: lib.eal log level changed from notice to debug 00:31:06.835 EAL: Detected lcore 0 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 1 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 2 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 3 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 4 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 5 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 6 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 7 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 8 as core 0 on socket 0 00:31:06.835 EAL: Detected lcore 9 as core 0 on socket 0 00:31:07.094 EAL: Maximum logical cores by configuration: 128 00:31:07.094 EAL: Detected CPU lcores: 10 00:31:07.094 EAL: Detected NUMA nodes: 1 00:31:07.094 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:31:07.094 EAL: Detected shared linkage of DPDK 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:31:07.094 EAL: Registered [vdev] bus. 00:31:07.094 EAL: bus.vdev log level changed from disabled to notice 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:31:07.094 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:31:07.094 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:31:07.094 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:31:07.094 EAL: No shared files mode enabled, IPC will be disabled 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: Selected IOVA mode 'PA' 00:31:07.094 EAL: Probing VFIO support... 00:31:07.094 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:31:07.094 EAL: VFIO modules not loaded, skipping VFIO support... 00:31:07.094 EAL: Ask a virtual area of 0x2e000 bytes 00:31:07.094 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:31:07.094 EAL: Setting up physically contiguous memory... 
00:31:07.094 EAL: Setting maximum number of open files to 524288 00:31:07.094 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:31:07.094 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:31:07.094 EAL: Ask a virtual area of 0x61000 bytes 00:31:07.094 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:31:07.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:07.094 EAL: Ask a virtual area of 0x400000000 bytes 00:31:07.094 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:31:07.094 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:31:07.094 EAL: Ask a virtual area of 0x61000 bytes 00:31:07.094 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:31:07.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:07.094 EAL: Ask a virtual area of 0x400000000 bytes 00:31:07.094 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:31:07.094 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:31:07.094 EAL: Ask a virtual area of 0x61000 bytes 00:31:07.094 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:31:07.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:07.094 EAL: Ask a virtual area of 0x400000000 bytes 00:31:07.094 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:31:07.094 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:31:07.094 EAL: Ask a virtual area of 0x61000 bytes 00:31:07.094 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:31:07.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:31:07.094 EAL: Ask a virtual area of 0x400000000 bytes 00:31:07.094 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:31:07.094 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:31:07.094 EAL: Hugepages will be freed exactly as allocated. 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: TSC frequency is ~2290000 KHz 00:31:07.094 EAL: Main lcore 0 is ready (tid=7f83286c5a00;cpuset=[0]) 00:31:07.094 EAL: Trying to obtain current memory policy. 00:31:07.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.094 EAL: Restoring previous memory policy: 0 00:31:07.094 EAL: request: mp_malloc_sync 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: Heap on socket 0 was expanded by 2MB 00:31:07.094 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: No PCI address specified using 'addr=' in: bus=pci 00:31:07.094 EAL: Mem event callback 'spdk:(nil)' registered 00:31:07.094 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:31:07.094 00:31:07.094 00:31:07.094 CUnit - A unit testing framework for C - Version 2.1-3 00:31:07.094 http://cunit.sourceforge.net/ 00:31:07.094 00:31:07.094 00:31:07.094 Suite: components_suite 00:31:07.094 Test: vtophys_malloc_test ...passed 00:31:07.094 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:31:07.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.094 EAL: Restoring previous memory policy: 4 00:31:07.094 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.094 EAL: request: mp_malloc_sync 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: Heap on socket 0 was expanded by 4MB 00:31:07.094 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.094 EAL: request: mp_malloc_sync 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: Heap on socket 0 was shrunk by 4MB 00:31:07.094 EAL: Trying to obtain current memory policy. 00:31:07.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.094 EAL: Restoring previous memory policy: 4 00:31:07.094 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.094 EAL: request: mp_malloc_sync 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: Heap on socket 0 was expanded by 6MB 00:31:07.094 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.094 EAL: request: mp_malloc_sync 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: Heap on socket 0 was shrunk by 6MB 00:31:07.094 EAL: Trying to obtain current memory policy. 00:31:07.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.094 EAL: Restoring previous memory policy: 4 00:31:07.094 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.094 EAL: request: mp_malloc_sync 00:31:07.094 EAL: No shared files mode enabled, IPC is disabled 00:31:07.094 EAL: Heap on socket 0 was expanded by 10MB 00:31:07.094 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was shrunk by 10MB 00:31:07.095 EAL: Trying to obtain current memory policy. 00:31:07.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.095 EAL: Restoring previous memory policy: 4 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was expanded by 18MB 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was shrunk by 18MB 00:31:07.095 EAL: Trying to obtain current memory policy. 00:31:07.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.095 EAL: Restoring previous memory policy: 4 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was expanded by 34MB 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was shrunk by 34MB 00:31:07.095 EAL: Trying to obtain current memory policy. 
00:31:07.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.095 EAL: Restoring previous memory policy: 4 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was expanded by 66MB 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was shrunk by 66MB 00:31:07.095 EAL: Trying to obtain current memory policy. 00:31:07.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.095 EAL: Restoring previous memory policy: 4 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was expanded by 130MB 00:31:07.095 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.095 EAL: request: mp_malloc_sync 00:31:07.095 EAL: No shared files mode enabled, IPC is disabled 00:31:07.095 EAL: Heap on socket 0 was shrunk by 130MB 00:31:07.095 EAL: Trying to obtain current memory policy. 00:31:07.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.354 EAL: Restoring previous memory policy: 4 00:31:07.354 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.354 EAL: request: mp_malloc_sync 00:31:07.354 EAL: No shared files mode enabled, IPC is disabled 00:31:07.354 EAL: Heap on socket 0 was expanded by 258MB 00:31:07.354 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.354 EAL: request: mp_malloc_sync 00:31:07.354 EAL: No shared files mode enabled, IPC is disabled 00:31:07.354 EAL: Heap on socket 0 was shrunk by 258MB 00:31:07.354 EAL: Trying to obtain current memory policy. 00:31:07.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.354 EAL: Restoring previous memory policy: 4 00:31:07.354 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.354 EAL: request: mp_malloc_sync 00:31:07.354 EAL: No shared files mode enabled, IPC is disabled 00:31:07.354 EAL: Heap on socket 0 was expanded by 514MB 00:31:07.613 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.613 EAL: request: mp_malloc_sync 00:31:07.613 EAL: No shared files mode enabled, IPC is disabled 00:31:07.613 EAL: Heap on socket 0 was shrunk by 514MB 00:31:07.613 EAL: Trying to obtain current memory policy. 
00:31:07.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:31:07.873 EAL: Restoring previous memory policy: 4 00:31:07.873 EAL: Calling mem event callback 'spdk:(nil)' 00:31:07.873 EAL: request: mp_malloc_sync 00:31:07.873 EAL: No shared files mode enabled, IPC is disabled 00:31:07.873 EAL: Heap on socket 0 was expanded by 1026MB 00:31:07.873 EAL: Calling mem event callback 'spdk:(nil)' 00:31:08.133 passed 00:31:08.133 00:31:08.133 Run Summary: Type Total Ran Passed Failed Inactive 00:31:08.133 suites 1 1 n/a 0 0 00:31:08.133 tests 2 2 2 0 0 00:31:08.133 asserts 5190 5190 5190 0 n/a 00:31:08.133 00:31:08.133 Elapsed time = 0.998 seconds 00:31:08.133 EAL: request: mp_malloc_sync 00:31:08.133 EAL: No shared files mode enabled, IPC is disabled 00:31:08.133 EAL: Heap on socket 0 was shrunk by 1026MB 00:31:08.133 EAL: Calling mem event callback 'spdk:(nil)' 00:31:08.133 EAL: request: mp_malloc_sync 00:31:08.133 EAL: No shared files mode enabled, IPC is disabled 00:31:08.133 EAL: Heap on socket 0 was shrunk by 2MB 00:31:08.133 EAL: No shared files mode enabled, IPC is disabled 00:31:08.133 EAL: No shared files mode enabled, IPC is disabled 00:31:08.133 EAL: No shared files mode enabled, IPC is disabled 00:31:08.133 00:31:08.133 real 0m1.196s 00:31:08.133 user 0m0.635s 00:31:08.133 sys 0m0.433s 00:31:08.133 14:44:27 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.133 14:44:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:31:08.133 ************************************ 00:31:08.133 END TEST env_vtophys 00:31:08.133 ************************************ 00:31:08.133 14:44:27 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:31:08.133 14:44:27 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:08.133 14:44:27 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:08.133 14:44:27 env -- common/autotest_common.sh@10 -- # set +x 00:31:08.133 ************************************ 00:31:08.133 START TEST env_pci 00:31:08.133 ************************************ 00:31:08.133 14:44:27 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:31:08.133 00:31:08.133 00:31:08.133 CUnit - A unit testing framework for C - Version 2.1-3 00:31:08.133 http://cunit.sourceforge.net/ 00:31:08.133 00:31:08.133 00:31:08.133 Suite: pci 00:31:08.133 Test: pci_hook ...[2024-07-22 14:44:27.714841] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72609 has claimed it 00:31:08.133 passed 00:31:08.133 00:31:08.133 Run Summary: Type Total Ran Passed Failed Inactive 00:31:08.133 suites 1 1 n/a 0 0 00:31:08.133 tests 1 1 1 0 0 00:31:08.133 asserts 25 25 25 0 n/a 00:31:08.133 00:31:08.133 Elapsed time = 0.003 seconds 00:31:08.133 EAL: Cannot find device (10000:00:01.0) 00:31:08.133 EAL: Failed to attach device on primary process 00:31:08.133 00:31:08.133 real 0m0.027s 00:31:08.133 user 0m0.013s 00:31:08.133 sys 0m0.014s 00:31:08.133 14:44:27 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.133 14:44:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:31:08.133 ************************************ 00:31:08.133 END TEST env_pci 00:31:08.133 ************************************ 00:31:08.393 14:44:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:31:08.393 14:44:27 env -- env/env.sh@15 -- # uname 00:31:08.393 14:44:27 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:31:08.393 14:44:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:31:08.393 14:44:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:31:08.393 14:44:27 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:31:08.393 14:44:27 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:08.393 14:44:27 env -- common/autotest_common.sh@10 -- # set +x 00:31:08.393 ************************************ 00:31:08.393 START TEST env_dpdk_post_init 00:31:08.393 ************************************ 00:31:08.393 14:44:27 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:31:08.393 EAL: Detected CPU lcores: 10 00:31:08.393 EAL: Detected NUMA nodes: 1 00:31:08.393 EAL: Detected shared linkage of DPDK 00:31:08.393 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:31:08.393 EAL: Selected IOVA mode 'PA' 00:31:08.393 TELEMETRY: No legacy callbacks, legacy socket not created 00:31:08.393 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:31:08.393 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:31:08.393 Starting DPDK initialization... 00:31:08.393 Starting SPDK post initialization... 00:31:08.393 SPDK NVMe probe 00:31:08.393 Attaching to 0000:00:10.0 00:31:08.393 Attaching to 0000:00:11.0 00:31:08.393 Attached to 0000:00:10.0 00:31:08.393 Attached to 0000:00:11.0 00:31:08.393 Cleaning up... 00:31:08.393 00:31:08.393 real 0m0.192s 00:31:08.393 user 0m0.051s 00:31:08.393 sys 0m0.041s 00:31:08.393 14:44:27 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.393 14:44:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:31:08.393 ************************************ 00:31:08.393 END TEST env_dpdk_post_init 00:31:08.393 ************************************ 00:31:08.652 14:44:28 env -- env/env.sh@26 -- # uname 00:31:08.652 14:44:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:31:08.652 14:44:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:31:08.652 14:44:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:08.652 14:44:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:08.652 14:44:28 env -- common/autotest_common.sh@10 -- # set +x 00:31:08.652 ************************************ 00:31:08.652 START TEST env_mem_callbacks 00:31:08.652 ************************************ 00:31:08.652 14:44:28 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:31:08.652 EAL: Detected CPU lcores: 10 00:31:08.652 EAL: Detected NUMA nodes: 1 00:31:08.652 EAL: Detected shared linkage of DPDK 00:31:08.652 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:31:08.652 EAL: Selected IOVA mode 'PA' 00:31:08.652 00:31:08.652 00:31:08.652 CUnit - A unit testing framework for C - Version 2.1-3 00:31:08.652 http://cunit.sourceforge.net/ 00:31:08.653 00:31:08.653 00:31:08.653 Suite: memory 00:31:08.653 Test: test ... 
00:31:08.653 register 0x200000200000 2097152 00:31:08.653 malloc 3145728 00:31:08.653 TELEMETRY: No legacy callbacks, legacy socket not created 00:31:08.653 register 0x200000400000 4194304 00:31:08.653 buf 0x200000500000 len 3145728 PASSED 00:31:08.653 malloc 64 00:31:08.653 buf 0x2000004fff40 len 64 PASSED 00:31:08.653 malloc 4194304 00:31:08.653 register 0x200000800000 6291456 00:31:08.653 buf 0x200000a00000 len 4194304 PASSED 00:31:08.653 free 0x200000500000 3145728 00:31:08.653 free 0x2000004fff40 64 00:31:08.653 unregister 0x200000400000 4194304 PASSED 00:31:08.653 free 0x200000a00000 4194304 00:31:08.653 unregister 0x200000800000 6291456 PASSED 00:31:08.653 malloc 8388608 00:31:08.653 register 0x200000400000 10485760 00:31:08.653 buf 0x200000600000 len 8388608 PASSED 00:31:08.653 free 0x200000600000 8388608 00:31:08.653 unregister 0x200000400000 10485760 PASSED 00:31:08.653 passed 00:31:08.653 00:31:08.653 Run Summary: Type Total Ran Passed Failed Inactive 00:31:08.653 suites 1 1 n/a 0 0 00:31:08.653 tests 1 1 1 0 0 00:31:08.653 asserts 15 15 15 0 n/a 00:31:08.653 00:31:08.653 Elapsed time = 0.010 seconds 00:31:08.653 00:31:08.653 real 0m0.147s 00:31:08.653 user 0m0.015s 00:31:08.653 sys 0m0.031s 00:31:08.653 14:44:28 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.653 14:44:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:31:08.653 ************************************ 00:31:08.653 END TEST env_mem_callbacks 00:31:08.653 ************************************ 00:31:08.653 00:31:08.653 real 0m2.189s 00:31:08.653 user 0m1.011s 00:31:08.653 sys 0m0.855s 00:31:08.653 14:44:28 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.653 14:44:28 env -- common/autotest_common.sh@10 -- # set +x 00:31:08.653 ************************************ 00:31:08.653 END TEST env 00:31:08.653 ************************************ 00:31:08.911 14:44:28 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:31:08.912 14:44:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:08.912 14:44:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:08.912 14:44:28 -- common/autotest_common.sh@10 -- # set +x 00:31:08.912 ************************************ 00:31:08.912 START TEST rpc 00:31:08.912 ************************************ 00:31:08.912 14:44:28 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:31:08.912 * Looking for test storage... 00:31:08.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:31:08.912 14:44:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=72719 00:31:08.912 14:44:28 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:31:08.912 14:44:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:31:08.912 14:44:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 72719 00:31:08.912 14:44:28 rpc -- common/autotest_common.sh@827 -- # '[' -z 72719 ']' 00:31:08.912 14:44:28 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.912 14:44:28 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:08.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.912 14:44:28 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:08.912 14:44:28 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:08.912 14:44:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:08.912 [2024-07-22 14:44:28.490427] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:08.912 [2024-07-22 14:44:28.490508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72719 ] 00:31:09.171 [2024-07-22 14:44:28.628007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.171 [2024-07-22 14:44:28.680616] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:31:09.171 [2024-07-22 14:44:28.680675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72719' to capture a snapshot of events at runtime. 00:31:09.171 [2024-07-22 14:44:28.680682] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.171 [2024-07-22 14:44:28.680687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.171 [2024-07-22 14:44:28.680692] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72719 for offline analysis/debug. 00:31:09.171 [2024-07-22 14:44:28.680714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.107 14:44:29 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:10.107 14:44:29 rpc -- common/autotest_common.sh@860 -- # return 0 00:31:10.107 14:44:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:31:10.107 14:44:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:31:10.107 14:44:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:31:10.107 14:44:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:31:10.107 14:44:29 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.107 14:44:29 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.107 14:44:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:10.107 ************************************ 00:31:10.107 START TEST rpc_integrity 00:31:10.107 ************************************ 00:31:10.107 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:31:10.107 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:10.107 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.107 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.107 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.107 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:31:10.107 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:31:10.107 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:31:10.107 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:31:10.107 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.107 14:44:29 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:31:10.107 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:31:10.108 { 00:31:10.108 "aliases": [ 00:31:10.108 "0b8583c7-bf3e-4044-82cf-bf275dc32e6b" 00:31:10.108 ], 00:31:10.108 "assigned_rate_limits": { 00:31:10.108 "r_mbytes_per_sec": 0, 00:31:10.108 "rw_ios_per_sec": 0, 00:31:10.108 "rw_mbytes_per_sec": 0, 00:31:10.108 "w_mbytes_per_sec": 0 00:31:10.108 }, 00:31:10.108 "block_size": 512, 00:31:10.108 "claimed": false, 00:31:10.108 "driver_specific": {}, 00:31:10.108 "memory_domains": [ 00:31:10.108 { 00:31:10.108 "dma_device_id": "system", 00:31:10.108 "dma_device_type": 1 00:31:10.108 }, 00:31:10.108 { 00:31:10.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.108 "dma_device_type": 2 00:31:10.108 } 00:31:10.108 ], 00:31:10.108 "name": "Malloc0", 00:31:10.108 "num_blocks": 16384, 00:31:10.108 "product_name": "Malloc disk", 00:31:10.108 "supported_io_types": { 00:31:10.108 "abort": true, 00:31:10.108 "compare": false, 00:31:10.108 "compare_and_write": false, 00:31:10.108 "flush": true, 00:31:10.108 "nvme_admin": false, 00:31:10.108 "nvme_io": false, 00:31:10.108 "read": true, 00:31:10.108 "reset": true, 00:31:10.108 "unmap": true, 00:31:10.108 "write": true, 00:31:10.108 "write_zeroes": true 00:31:10.108 }, 00:31:10.108 "uuid": "0b8583c7-bf3e-4044-82cf-bf275dc32e6b", 00:31:10.108 "zoned": false 00:31:10.108 } 00:31:10.108 ]' 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.108 [2024-07-22 14:44:29.553915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:31:10.108 [2024-07-22 14:44:29.553967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.108 [2024-07-22 14:44:29.553997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xae9f10 00:31:10.108 [2024-07-22 14:44:29.554007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.108 [2024-07-22 14:44:29.555481] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.108 [2024-07-22 14:44:29.555529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:31:10.108 Passthru0 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:31:10.108 { 00:31:10.108 "aliases": [ 00:31:10.108 "0b8583c7-bf3e-4044-82cf-bf275dc32e6b" 00:31:10.108 ], 00:31:10.108 "assigned_rate_limits": { 00:31:10.108 "r_mbytes_per_sec": 0, 00:31:10.108 "rw_ios_per_sec": 0, 00:31:10.108 "rw_mbytes_per_sec": 0, 00:31:10.108 "w_mbytes_per_sec": 0 00:31:10.108 }, 00:31:10.108 "block_size": 512, 00:31:10.108 "claim_type": "exclusive_write", 00:31:10.108 "claimed": true, 00:31:10.108 "driver_specific": {}, 00:31:10.108 "memory_domains": [ 00:31:10.108 { 00:31:10.108 "dma_device_id": "system", 00:31:10.108 "dma_device_type": 1 00:31:10.108 }, 00:31:10.108 { 00:31:10.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.108 "dma_device_type": 2 00:31:10.108 } 00:31:10.108 ], 00:31:10.108 "name": "Malloc0", 00:31:10.108 "num_blocks": 16384, 00:31:10.108 "product_name": "Malloc disk", 00:31:10.108 "supported_io_types": { 00:31:10.108 "abort": true, 00:31:10.108 "compare": false, 00:31:10.108 "compare_and_write": false, 00:31:10.108 "flush": true, 00:31:10.108 "nvme_admin": false, 00:31:10.108 "nvme_io": false, 00:31:10.108 "read": true, 00:31:10.108 "reset": true, 00:31:10.108 "unmap": true, 00:31:10.108 "write": true, 00:31:10.108 "write_zeroes": true 00:31:10.108 }, 00:31:10.108 "uuid": "0b8583c7-bf3e-4044-82cf-bf275dc32e6b", 00:31:10.108 "zoned": false 00:31:10.108 }, 00:31:10.108 { 00:31:10.108 "aliases": [ 00:31:10.108 "5d5fccbe-615c-527a-bc61-2186b8d4bc6e" 00:31:10.108 ], 00:31:10.108 "assigned_rate_limits": { 00:31:10.108 "r_mbytes_per_sec": 0, 00:31:10.108 "rw_ios_per_sec": 0, 00:31:10.108 "rw_mbytes_per_sec": 0, 00:31:10.108 "w_mbytes_per_sec": 0 00:31:10.108 }, 00:31:10.108 "block_size": 512, 00:31:10.108 "claimed": false, 00:31:10.108 "driver_specific": { 00:31:10.108 "passthru": { 00:31:10.108 "base_bdev_name": "Malloc0", 00:31:10.108 "name": "Passthru0" 00:31:10.108 } 00:31:10.108 }, 00:31:10.108 "memory_domains": [ 00:31:10.108 { 00:31:10.108 "dma_device_id": "system", 00:31:10.108 "dma_device_type": 1 00:31:10.108 }, 00:31:10.108 { 00:31:10.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.108 "dma_device_type": 2 00:31:10.108 } 00:31:10.108 ], 00:31:10.108 "name": "Passthru0", 00:31:10.108 "num_blocks": 16384, 00:31:10.108 "product_name": "passthru", 00:31:10.108 "supported_io_types": { 00:31:10.108 "abort": true, 00:31:10.108 "compare": false, 00:31:10.108 "compare_and_write": false, 00:31:10.108 "flush": true, 00:31:10.108 "nvme_admin": false, 00:31:10.108 "nvme_io": false, 00:31:10.108 "read": true, 00:31:10.108 "reset": true, 00:31:10.108 "unmap": true, 00:31:10.108 "write": true, 00:31:10.108 "write_zeroes": true 00:31:10.108 }, 00:31:10.108 "uuid": "5d5fccbe-615c-527a-bc61-2186b8d4bc6e", 00:31:10.108 "zoned": false 00:31:10.108 } 00:31:10.108 ]' 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
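The rpc_integrity test running here boils down to four bdev RPCs: create a malloc bdev, layer a passthru bdev on top of it, confirm both show up in bdev_get_bdevs, then delete them in reverse order and confirm the list is empty again. A rough stand-alone equivalent is sketched below; it assumes scripts/rpc.py (which the harness's rpc_cmd helper is assumed to wrap) is on PATH and talking to the default /var/tmp/spdk.sock.

  # create an 8 MB malloc bdev with 512-byte blocks, then layer a passthru bdev on it
  rpc.py bdev_malloc_create 8 512              # prints the new bdev name, e.g. Malloc0
  rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  rpc.py bdev_get_bdevs | jq length            # both bdevs reported: expect 2
  # tear down in reverse order and confirm the list is empty again
  rpc.py bdev_passthru_delete Passthru0
  rpc.py bdev_malloc_delete Malloc0
  rpc.py bdev_get_bdevs | jq length            # expect 0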
00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:31:10.108 14:44:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:31:10.108 00:31:10.108 real 0m0.295s 00:31:10.108 user 0m0.176s 00:31:10.108 sys 0m0.040s 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.108 14:44:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:10.108 ************************************ 00:31:10.108 END TEST rpc_integrity 00:31:10.108 ************************************ 00:31:10.368 14:44:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:31:10.368 14:44:29 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.368 14:44:29 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.368 14:44:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:10.368 ************************************ 00:31:10.368 START TEST rpc_plugins 00:31:10.368 ************************************ 00:31:10.368 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:31:10.368 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:31:10.368 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.368 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:10.368 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.368 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:31:10.368 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:31:10.368 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.368 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:10.368 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.368 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:31:10.368 { 00:31:10.368 "aliases": [ 00:31:10.368 "2219875c-ffcb-4870-b5ae-9908a02a9f35" 00:31:10.368 ], 00:31:10.368 "assigned_rate_limits": { 00:31:10.368 "r_mbytes_per_sec": 0, 00:31:10.368 "rw_ios_per_sec": 0, 00:31:10.368 "rw_mbytes_per_sec": 0, 00:31:10.368 "w_mbytes_per_sec": 0 00:31:10.369 }, 00:31:10.369 "block_size": 4096, 00:31:10.369 "claimed": false, 00:31:10.369 "driver_specific": {}, 00:31:10.369 "memory_domains": [ 00:31:10.369 { 00:31:10.369 "dma_device_id": "system", 00:31:10.369 "dma_device_type": 1 00:31:10.369 }, 00:31:10.369 { 00:31:10.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.369 "dma_device_type": 2 00:31:10.369 } 00:31:10.369 ], 00:31:10.369 "name": "Malloc1", 00:31:10.369 "num_blocks": 256, 00:31:10.369 "product_name": "Malloc disk", 00:31:10.369 "supported_io_types": { 00:31:10.369 "abort": true, 00:31:10.369 "compare": false, 00:31:10.369 "compare_and_write": false, 00:31:10.369 "flush": true, 00:31:10.369 "nvme_admin": false, 00:31:10.369 
"nvme_io": false, 00:31:10.369 "read": true, 00:31:10.369 "reset": true, 00:31:10.369 "unmap": true, 00:31:10.369 "write": true, 00:31:10.369 "write_zeroes": true 00:31:10.369 }, 00:31:10.369 "uuid": "2219875c-ffcb-4870-b5ae-9908a02a9f35", 00:31:10.369 "zoned": false 00:31:10.369 } 00:31:10.369 ]' 00:31:10.369 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:31:10.369 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:31:10.369 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.369 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.369 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:31:10.369 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:31:10.369 14:44:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:31:10.369 00:31:10.369 real 0m0.153s 00:31:10.369 user 0m0.096s 00:31:10.369 sys 0m0.028s 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.369 14:44:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:31:10.369 ************************************ 00:31:10.369 END TEST rpc_plugins 00:31:10.369 ************************************ 00:31:10.369 14:44:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:31:10.369 14:44:29 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.369 14:44:29 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.369 14:44:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:10.369 ************************************ 00:31:10.369 START TEST rpc_trace_cmd_test 00:31:10.369 ************************************ 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:31:10.369 "bdev": { 00:31:10.369 "mask": "0x8", 00:31:10.369 "tpoint_mask": "0xffffffffffffffff" 00:31:10.369 }, 00:31:10.369 "bdev_nvme": { 00:31:10.369 "mask": "0x4000", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "blobfs": { 00:31:10.369 "mask": "0x80", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "dsa": { 00:31:10.369 "mask": "0x200", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "ftl": { 00:31:10.369 "mask": "0x40", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "iaa": { 00:31:10.369 "mask": "0x1000", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "iscsi_conn": { 00:31:10.369 
"mask": "0x2", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "nvme_pcie": { 00:31:10.369 "mask": "0x800", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "nvme_tcp": { 00:31:10.369 "mask": "0x2000", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "nvmf_rdma": { 00:31:10.369 "mask": "0x10", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "nvmf_tcp": { 00:31:10.369 "mask": "0x20", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "scsi": { 00:31:10.369 "mask": "0x4", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "sock": { 00:31:10.369 "mask": "0x8000", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "thread": { 00:31:10.369 "mask": "0x400", 00:31:10.369 "tpoint_mask": "0x0" 00:31:10.369 }, 00:31:10.369 "tpoint_group_mask": "0x8", 00:31:10.369 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72719" 00:31:10.369 }' 00:31:10.369 14:44:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:31:10.628 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:31:10.629 00:31:10.629 real 0m0.240s 00:31:10.629 user 0m0.194s 00:31:10.629 sys 0m0.036s 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.629 14:44:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.629 ************************************ 00:31:10.629 END TEST rpc_trace_cmd_test 00:31:10.629 ************************************ 00:31:10.629 14:44:30 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:31:10.629 14:44:30 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:31:10.629 14:44:30 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.629 14:44:30 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.629 14:44:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:10.887 ************************************ 00:31:10.887 START TEST go_rpc 00:31:10.887 ************************************ 00:31:10.887 14:44:30 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:31:10.887 14:44:30 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:31:10.887 14:44:30 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:31:10.887 14:44:30 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:31:10.887 14:44:30 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:31:10.887 14:44:30 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:31:10.887 14:44:30 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.887 14:44:30 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:10.887 14:44:30 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:31:10.888 14:44:30 
rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["9344dfa6-b256-4470-b409-be3fefa1ca5f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"9344dfa6-b256-4470-b409-be3fefa1ca5f","zoned":false}]' 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:31:10.888 14:44:30 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.888 14:44:30 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:10.888 14:44:30 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:31:10.888 14:44:30 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:31:10.888 00:31:10.888 real 0m0.230s 00:31:10.888 user 0m0.151s 00:31:10.888 sys 0m0.049s 00:31:10.888 14:44:30 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:10.888 14:44:30 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:10.888 ************************************ 00:31:10.888 END TEST go_rpc 00:31:10.888 ************************************ 00:31:11.147 14:44:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:31:11.147 14:44:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:31:11.147 14:44:30 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:11.147 14:44:30 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:11.147 14:44:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:11.147 ************************************ 00:31:11.147 START TEST rpc_daemon_integrity 00:31:11.147 ************************************ 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.147 
14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:31:11.147 { 00:31:11.147 "aliases": [ 00:31:11.147 "502ff466-a59c-43d0-ba0e-355f84c5721f" 00:31:11.147 ], 00:31:11.147 "assigned_rate_limits": { 00:31:11.147 "r_mbytes_per_sec": 0, 00:31:11.147 "rw_ios_per_sec": 0, 00:31:11.147 "rw_mbytes_per_sec": 0, 00:31:11.147 "w_mbytes_per_sec": 0 00:31:11.147 }, 00:31:11.147 "block_size": 512, 00:31:11.147 "claimed": false, 00:31:11.147 "driver_specific": {}, 00:31:11.147 "memory_domains": [ 00:31:11.147 { 00:31:11.147 "dma_device_id": "system", 00:31:11.147 "dma_device_type": 1 00:31:11.147 }, 00:31:11.147 { 00:31:11.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.147 "dma_device_type": 2 00:31:11.147 } 00:31:11.147 ], 00:31:11.147 "name": "Malloc3", 00:31:11.147 "num_blocks": 16384, 00:31:11.147 "product_name": "Malloc disk", 00:31:11.147 "supported_io_types": { 00:31:11.147 "abort": true, 00:31:11.147 "compare": false, 00:31:11.147 "compare_and_write": false, 00:31:11.147 "flush": true, 00:31:11.147 "nvme_admin": false, 00:31:11.147 "nvme_io": false, 00:31:11.147 "read": true, 00:31:11.147 "reset": true, 00:31:11.147 "unmap": true, 00:31:11.147 "write": true, 00:31:11.147 "write_zeroes": true 00:31:11.147 }, 00:31:11.147 "uuid": "502ff466-a59c-43d0-ba0e-355f84c5721f", 00:31:11.147 "zoned": false 00:31:11.147 } 00:31:11.147 ]' 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.147 [2024-07-22 14:44:30.704145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:31:11.147 [2024-07-22 14:44:30.704184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:11.147 [2024-07-22 14:44:30.704200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaea8a0 00:31:11.147 [2024-07-22 14:44:30.704205] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:11.147 [2024-07-22 14:44:30.705656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:11.147 [2024-07-22 14:44:30.705711] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:31:11.147 Passthru0 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.147 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:31:11.148 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.148 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.148 14:44:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.148 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:31:11.148 { 00:31:11.148 "aliases": [ 00:31:11.148 "502ff466-a59c-43d0-ba0e-355f84c5721f" 00:31:11.148 ], 00:31:11.148 "assigned_rate_limits": { 00:31:11.148 "r_mbytes_per_sec": 0, 00:31:11.148 "rw_ios_per_sec": 0, 00:31:11.148 "rw_mbytes_per_sec": 0, 00:31:11.148 "w_mbytes_per_sec": 0 00:31:11.148 }, 00:31:11.148 "block_size": 512, 00:31:11.148 "claim_type": "exclusive_write", 00:31:11.148 "claimed": true, 00:31:11.148 "driver_specific": {}, 00:31:11.148 "memory_domains": [ 00:31:11.148 { 00:31:11.148 "dma_device_id": "system", 00:31:11.148 "dma_device_type": 1 00:31:11.148 }, 00:31:11.148 { 00:31:11.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.148 "dma_device_type": 2 00:31:11.148 } 00:31:11.148 ], 00:31:11.148 "name": "Malloc3", 00:31:11.148 "num_blocks": 16384, 00:31:11.148 "product_name": "Malloc disk", 00:31:11.148 "supported_io_types": { 00:31:11.148 "abort": true, 00:31:11.148 "compare": false, 00:31:11.148 "compare_and_write": false, 00:31:11.148 "flush": true, 00:31:11.148 "nvme_admin": false, 00:31:11.148 "nvme_io": false, 00:31:11.148 "read": true, 00:31:11.148 "reset": true, 00:31:11.148 "unmap": true, 00:31:11.148 "write": true, 00:31:11.148 "write_zeroes": true 00:31:11.148 }, 00:31:11.148 "uuid": "502ff466-a59c-43d0-ba0e-355f84c5721f", 00:31:11.148 "zoned": false 00:31:11.148 }, 00:31:11.148 { 00:31:11.148 "aliases": [ 00:31:11.148 "5c12ac71-db78-5ff7-83c2-434aaf839f06" 00:31:11.148 ], 00:31:11.148 "assigned_rate_limits": { 00:31:11.148 "r_mbytes_per_sec": 0, 00:31:11.148 "rw_ios_per_sec": 0, 00:31:11.148 "rw_mbytes_per_sec": 0, 00:31:11.148 "w_mbytes_per_sec": 0 00:31:11.148 }, 00:31:11.148 "block_size": 512, 00:31:11.148 "claimed": false, 00:31:11.148 "driver_specific": { 00:31:11.148 "passthru": { 00:31:11.148 "base_bdev_name": "Malloc3", 00:31:11.148 "name": "Passthru0" 00:31:11.148 } 00:31:11.148 }, 00:31:11.148 "memory_domains": [ 00:31:11.148 { 00:31:11.148 "dma_device_id": "system", 00:31:11.148 "dma_device_type": 1 00:31:11.148 }, 00:31:11.148 { 00:31:11.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:11.148 "dma_device_type": 2 00:31:11.148 } 00:31:11.148 ], 00:31:11.148 "name": "Passthru0", 00:31:11.148 "num_blocks": 16384, 00:31:11.148 "product_name": "passthru", 00:31:11.148 "supported_io_types": { 00:31:11.148 "abort": true, 00:31:11.148 "compare": false, 00:31:11.148 "compare_and_write": false, 00:31:11.148 "flush": true, 00:31:11.148 "nvme_admin": false, 00:31:11.148 "nvme_io": false, 00:31:11.148 "read": true, 00:31:11.148 "reset": true, 00:31:11.148 "unmap": true, 00:31:11.148 "write": true, 00:31:11.148 "write_zeroes": true 00:31:11.148 }, 00:31:11.148 "uuid": "5c12ac71-db78-5ff7-83c2-434aaf839f06", 00:31:11.148 "zoned": false 00:31:11.148 } 00:31:11.148 ]' 00:31:11.148 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc3 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:31:11.406 00:31:11.406 real 0m0.309s 00:31:11.406 user 0m0.186s 00:31:11.406 sys 0m0.048s 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:11.406 14:44:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:31:11.406 ************************************ 00:31:11.406 END TEST rpc_daemon_integrity 00:31:11.406 ************************************ 00:31:11.406 14:44:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:11.406 14:44:30 rpc -- rpc/rpc.sh@84 -- # killprocess 72719 00:31:11.406 14:44:30 rpc -- common/autotest_common.sh@946 -- # '[' -z 72719 ']' 00:31:11.406 14:44:30 rpc -- common/autotest_common.sh@950 -- # kill -0 72719 00:31:11.406 14:44:30 rpc -- common/autotest_common.sh@951 -- # uname 00:31:11.406 14:44:30 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:11.406 14:44:30 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72719 00:31:11.407 14:44:30 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:11.407 14:44:30 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:11.407 killing process with pid 72719 00:31:11.407 14:44:30 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72719' 00:31:11.407 14:44:30 rpc -- common/autotest_common.sh@965 -- # kill 72719 00:31:11.407 14:44:30 rpc -- common/autotest_common.sh@970 -- # wait 72719 00:31:11.665 00:31:11.665 real 0m2.956s 00:31:11.665 user 0m3.834s 00:31:11.665 sys 0m0.800s 00:31:11.665 14:44:31 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:11.665 14:44:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:31:11.665 ************************************ 00:31:11.665 END TEST rpc 00:31:11.666 ************************************ 00:31:11.923 14:44:31 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:31:11.924 14:44:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:11.924 14:44:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:11.924 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:31:11.924 ************************************ 00:31:11.924 START TEST skip_rpc 00:31:11.924 ************************************ 00:31:11.924 14:44:31 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:31:11.924 * Looking for test storage... 
00:31:11.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:31:11.924 14:44:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:31:11.924 14:44:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:31:11.924 14:44:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:31:11.924 14:44:31 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:11.924 14:44:31 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:11.924 14:44:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:11.924 ************************************ 00:31:11.924 START TEST skip_rpc 00:31:11.924 ************************************ 00:31:11.924 14:44:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:31:11.924 14:44:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=72980 00:31:11.924 14:44:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:31:11.924 14:44:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:31:11.924 14:44:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:31:11.924 [2024-07-22 14:44:31.514072] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:11.924 [2024-07-22 14:44:31.514552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72980 ] 00:31:12.182 [2024-07-22 14:44:31.654319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.182 [2024-07-22 14:44:31.706095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:17.447 2024/07/22 14:44:36 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 72980 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 72980 ']' 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 72980 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72980 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:17.447 killing process with pid 72980 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72980' 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 72980 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 72980 00:31:17.447 00:31:17.447 real 0m5.371s 00:31:17.447 user 0m5.046s 00:31:17.447 sys 0m0.247s 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:17.447 14:44:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:17.447 ************************************ 00:31:17.447 END TEST skip_rpc 00:31:17.447 ************************************ 00:31:17.447 14:44:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:31:17.447 14:44:36 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:17.447 14:44:36 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:17.447 14:44:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:17.447 ************************************ 00:31:17.447 START TEST skip_rpc_with_json 00:31:17.447 ************************************ 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=73067 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 73067 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 73067 ']' 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:17.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
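skip_rpc_with_json, which starts here, checks that a configuration captured over JSON-RPC can bring the target back up with the RPC server disabled: create a TCP transport, dump the live configuration with save_config, restart spdk_tgt with --no-rpc-server --json, and grep the captured log for the transport-init notice. A hand-driven sketch of the same round trip is below; it reuses the config.json and log.txt paths from the harness, assumes rpc.py is on PATH, and simplifies process teardown to a plain kill.

  # capture the running configuration, including the freshly created TCP transport
  rpc.py nvmf_create_transport -t tcp
  rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  # replay it with the RPC server disabled and check that the transport initializes
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
      --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json \
      > /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 2>&1 &
  sleep 5
  kill $!
  grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt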
00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:17.447 14:44:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:17.447 [2024-07-22 14:44:36.946162] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:17.447 [2024-07-22 14:44:36.946250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73067 ] 00:31:17.707 [2024-07-22 14:44:37.085715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.707 [2024-07-22 14:44:37.148196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:18.300 [2024-07-22 14:44:37.817314] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:31:18.300 2024/07/22 14:44:37 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:31:18.300 request: 00:31:18.300 { 00:31:18.300 "method": "nvmf_get_transports", 00:31:18.300 "params": { 00:31:18.300 "trtype": "tcp" 00:31:18.300 } 00:31:18.300 } 00:31:18.300 Got JSON-RPC error response 00:31:18.300 GoRPCClient: error on JSON-RPC call 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:18.300 [2024-07-22 14:44:37.829365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.300 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:18.560 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.560 14:44:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:31:18.560 { 00:31:18.560 "subsystems": [ 00:31:18.560 { 00:31:18.560 "subsystem": "keyring", 00:31:18.560 "config": [] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "iobuf", 00:31:18.560 "config": [ 00:31:18.560 { 00:31:18.560 "method": "iobuf_set_options", 00:31:18.560 "params": { 00:31:18.560 "large_bufsize": 135168, 00:31:18.560 "large_pool_count": 1024, 00:31:18.560 "small_bufsize": 8192, 00:31:18.560 "small_pool_count": 8192 00:31:18.560 } 00:31:18.560 } 00:31:18.560 ] 
00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "sock", 00:31:18.560 "config": [ 00:31:18.560 { 00:31:18.560 "method": "sock_set_default_impl", 00:31:18.560 "params": { 00:31:18.560 "impl_name": "posix" 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "sock_impl_set_options", 00:31:18.560 "params": { 00:31:18.560 "enable_ktls": false, 00:31:18.560 "enable_placement_id": 0, 00:31:18.560 "enable_quickack": false, 00:31:18.560 "enable_recv_pipe": true, 00:31:18.560 "enable_zerocopy_send_client": false, 00:31:18.560 "enable_zerocopy_send_server": true, 00:31:18.560 "impl_name": "ssl", 00:31:18.560 "recv_buf_size": 4096, 00:31:18.560 "send_buf_size": 4096, 00:31:18.560 "tls_version": 0, 00:31:18.560 "zerocopy_threshold": 0 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "sock_impl_set_options", 00:31:18.560 "params": { 00:31:18.560 "enable_ktls": false, 00:31:18.560 "enable_placement_id": 0, 00:31:18.560 "enable_quickack": false, 00:31:18.560 "enable_recv_pipe": true, 00:31:18.560 "enable_zerocopy_send_client": false, 00:31:18.560 "enable_zerocopy_send_server": true, 00:31:18.560 "impl_name": "posix", 00:31:18.560 "recv_buf_size": 2097152, 00:31:18.560 "send_buf_size": 2097152, 00:31:18.560 "tls_version": 0, 00:31:18.560 "zerocopy_threshold": 0 00:31:18.560 } 00:31:18.560 } 00:31:18.560 ] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "vmd", 00:31:18.560 "config": [] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "accel", 00:31:18.560 "config": [ 00:31:18.560 { 00:31:18.560 "method": "accel_set_options", 00:31:18.560 "params": { 00:31:18.560 "buf_count": 2048, 00:31:18.560 "large_cache_size": 16, 00:31:18.560 "sequence_count": 2048, 00:31:18.560 "small_cache_size": 128, 00:31:18.560 "task_count": 2048 00:31:18.560 } 00:31:18.560 } 00:31:18.560 ] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "bdev", 00:31:18.560 "config": [ 00:31:18.560 { 00:31:18.560 "method": "bdev_set_options", 00:31:18.560 "params": { 00:31:18.560 "bdev_auto_examine": true, 00:31:18.560 "bdev_io_cache_size": 256, 00:31:18.560 "bdev_io_pool_size": 65535, 00:31:18.560 "iobuf_large_cache_size": 16, 00:31:18.560 "iobuf_small_cache_size": 128 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "bdev_raid_set_options", 00:31:18.560 "params": { 00:31:18.560 "process_window_size_kb": 1024 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "bdev_iscsi_set_options", 00:31:18.560 "params": { 00:31:18.560 "timeout_sec": 30 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "bdev_nvme_set_options", 00:31:18.560 "params": { 00:31:18.560 "action_on_timeout": "none", 00:31:18.560 "allow_accel_sequence": false, 00:31:18.560 "arbitration_burst": 0, 00:31:18.560 "bdev_retry_count": 3, 00:31:18.560 "ctrlr_loss_timeout_sec": 0, 00:31:18.560 "delay_cmd_submit": true, 00:31:18.560 "dhchap_dhgroups": [ 00:31:18.560 "null", 00:31:18.560 "ffdhe2048", 00:31:18.560 "ffdhe3072", 00:31:18.560 "ffdhe4096", 00:31:18.560 "ffdhe6144", 00:31:18.560 "ffdhe8192" 00:31:18.560 ], 00:31:18.560 "dhchap_digests": [ 00:31:18.560 "sha256", 00:31:18.560 "sha384", 00:31:18.560 "sha512" 00:31:18.560 ], 00:31:18.560 "disable_auto_failback": false, 00:31:18.560 "fast_io_fail_timeout_sec": 0, 00:31:18.560 "generate_uuids": false, 00:31:18.560 "high_priority_weight": 0, 00:31:18.560 "io_path_stat": false, 00:31:18.560 "io_queue_requests": 0, 00:31:18.560 "keep_alive_timeout_ms": 10000, 00:31:18.560 "low_priority_weight": 0, 00:31:18.560 
"medium_priority_weight": 0, 00:31:18.560 "nvme_adminq_poll_period_us": 10000, 00:31:18.560 "nvme_error_stat": false, 00:31:18.560 "nvme_ioq_poll_period_us": 0, 00:31:18.560 "rdma_cm_event_timeout_ms": 0, 00:31:18.560 "rdma_max_cq_size": 0, 00:31:18.560 "rdma_srq_size": 0, 00:31:18.560 "reconnect_delay_sec": 0, 00:31:18.560 "timeout_admin_us": 0, 00:31:18.560 "timeout_us": 0, 00:31:18.560 "transport_ack_timeout": 0, 00:31:18.560 "transport_retry_count": 4, 00:31:18.560 "transport_tos": 0 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "bdev_nvme_set_hotplug", 00:31:18.560 "params": { 00:31:18.560 "enable": false, 00:31:18.560 "period_us": 100000 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "bdev_wait_for_examine" 00:31:18.560 } 00:31:18.560 ] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "scsi", 00:31:18.560 "config": null 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "scheduler", 00:31:18.560 "config": [ 00:31:18.560 { 00:31:18.560 "method": "framework_set_scheduler", 00:31:18.560 "params": { 00:31:18.560 "name": "static" 00:31:18.560 } 00:31:18.560 } 00:31:18.560 ] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "vhost_scsi", 00:31:18.560 "config": [] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "vhost_blk", 00:31:18.560 "config": [] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "ublk", 00:31:18.560 "config": [] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "nbd", 00:31:18.560 "config": [] 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "subsystem": "nvmf", 00:31:18.560 "config": [ 00:31:18.560 { 00:31:18.560 "method": "nvmf_set_config", 00:31:18.560 "params": { 00:31:18.560 "admin_cmd_passthru": { 00:31:18.560 "identify_ctrlr": false 00:31:18.560 }, 00:31:18.560 "discovery_filter": "match_any" 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "nvmf_set_max_subsystems", 00:31:18.560 "params": { 00:31:18.560 "max_subsystems": 1024 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "nvmf_set_crdt", 00:31:18.560 "params": { 00:31:18.560 "crdt1": 0, 00:31:18.560 "crdt2": 0, 00:31:18.560 "crdt3": 0 00:31:18.560 } 00:31:18.560 }, 00:31:18.560 { 00:31:18.560 "method": "nvmf_create_transport", 00:31:18.560 "params": { 00:31:18.560 "abort_timeout_sec": 1, 00:31:18.560 "ack_timeout": 0, 00:31:18.560 "buf_cache_size": 4294967295, 00:31:18.560 "c2h_success": true, 00:31:18.560 "data_wr_pool_size": 0, 00:31:18.560 "dif_insert_or_strip": false, 00:31:18.561 "in_capsule_data_size": 4096, 00:31:18.561 "io_unit_size": 131072, 00:31:18.561 "max_aq_depth": 128, 00:31:18.561 "max_io_qpairs_per_ctrlr": 127, 00:31:18.561 "max_io_size": 131072, 00:31:18.561 "max_queue_depth": 128, 00:31:18.561 "num_shared_buffers": 511, 00:31:18.561 "sock_priority": 0, 00:31:18.561 "trtype": "TCP", 00:31:18.561 "zcopy": false 00:31:18.561 } 00:31:18.561 } 00:31:18.561 ] 00:31:18.561 }, 00:31:18.561 { 00:31:18.561 "subsystem": "iscsi", 00:31:18.561 "config": [ 00:31:18.561 { 00:31:18.561 "method": "iscsi_set_options", 00:31:18.561 "params": { 00:31:18.561 "allow_duplicated_isid": false, 00:31:18.561 "chap_group": 0, 00:31:18.561 "data_out_pool_size": 2048, 00:31:18.561 "default_time2retain": 20, 00:31:18.561 "default_time2wait": 2, 00:31:18.561 "disable_chap": false, 00:31:18.561 "error_recovery_level": 0, 00:31:18.561 "first_burst_length": 8192, 00:31:18.561 "immediate_data": true, 00:31:18.561 "immediate_data_pool_size": 16384, 00:31:18.561 "max_connections_per_session": 2, 
00:31:18.561 "max_large_datain_per_connection": 64, 00:31:18.561 "max_queue_depth": 64, 00:31:18.561 "max_r2t_per_connection": 4, 00:31:18.561 "max_sessions": 128, 00:31:18.561 "mutual_chap": false, 00:31:18.561 "node_base": "iqn.2016-06.io.spdk", 00:31:18.561 "nop_in_interval": 30, 00:31:18.561 "nop_timeout": 60, 00:31:18.561 "pdu_pool_size": 36864, 00:31:18.561 "require_chap": false 00:31:18.561 } 00:31:18.561 } 00:31:18.561 ] 00:31:18.561 } 00:31:18.561 ] 00:31:18.561 } 00:31:18.561 14:44:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:18.561 14:44:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 73067 00:31:18.561 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 73067 ']' 00:31:18.561 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 73067 00:31:18.561 14:44:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:31:18.561 14:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:18.561 14:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73067 00:31:18.561 14:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:18.561 14:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:18.561 killing process with pid 73067 00:31:18.561 14:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73067' 00:31:18.561 14:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 73067 00:31:18.561 14:44:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 73067 00:31:18.820 14:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:31:18.820 14:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=73106 00:31:18.820 14:44:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 73106 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 73106 ']' 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 73106 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73106 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:24.114 killing process with pid 73106 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73106' 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 73106 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 73106 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:31:24.114 00:31:24.114 real 0m6.800s 00:31:24.114 user 0m6.554s 00:31:24.114 sys 0m0.536s 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:24.114 14:44:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:31:24.114 ************************************ 00:31:24.114 END TEST skip_rpc_with_json 00:31:24.114 ************************************ 00:31:24.114 14:44:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:31:24.114 14:44:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:24.114 14:44:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:24.114 14:44:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:24.374 ************************************ 00:31:24.374 START TEST skip_rpc_with_delay 00:31:24.374 ************************************ 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:31:24.374 [2024-07-22 14:44:43.811467] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
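The app.c error above is the point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until an RPC tells the target to continue, which can never happen with --no-rpc-server, so the combination has to be rejected at startup and the harness treats the non-zero exit as a pass. A stripped-down version of that negative check:

  # the combination must fail: --wait-for-rpc waits for an RPC that can never
  # arrive when the RPC server is disabled
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt unexpectedly accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi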
00:31:24.374 [2024-07-22 14:44:43.811565] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:24.374 00:31:24.374 real 0m0.074s 00:31:24.374 user 0m0.041s 00:31:24.374 sys 0m0.032s 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:24.374 14:44:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:31:24.374 ************************************ 00:31:24.374 END TEST skip_rpc_with_delay 00:31:24.374 ************************************ 00:31:24.374 14:44:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:31:24.374 14:44:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:31:24.374 14:44:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:31:24.375 14:44:43 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:24.375 14:44:43 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:24.375 14:44:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:24.375 ************************************ 00:31:24.375 START TEST exit_on_failed_rpc_init 00:31:24.375 ************************************ 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=73216 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 73216 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 73216 ']' 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:24.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:24.375 14:44:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:31:24.375 [2024-07-22 14:44:43.948416] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:24.375 [2024-07-22 14:44:43.948489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73216 ] 00:31:24.638 [2024-07-22 14:44:44.086084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.638 [2024-07-22 14:44:44.136043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:31:25.218 14:44:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:31:25.477 [2024-07-22 14:44:44.860950] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:25.477 [2024-07-22 14:44:44.861021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73246 ] 00:31:25.477 [2024-07-22 14:44:44.988235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.477 [2024-07-22 14:44:45.035754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.477 [2024-07-22 14:44:45.035825] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
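Note: the rpc.c errors just above are the intended failure in exit_on_failed_rpc_init — a second spdk_tgt is launched while pid 73216 still owns /var/tmp/spdk.sock, so RPC initialization fails and the app stops with a non-zero code. A condensed sketch of that setup (the startup wait is simplified; the real test uses waitforlisten):

    # Sketch only: two targets contending for one RPC socket; paths taken from the log.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_TGT" -m 0x1 &                 # first instance claims /var/tmp/spdk.sock
    first_pid=$!
    sleep 1                              # crude stand-in for waitforlisten
    if "$SPDK_TGT" -m 0x2; then          # must fail: RPC socket already in use
        echo "unexpected: second target came up" >&2
    fi
    kill -SIGINT "$first_pid"; wait "$first_pid"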
00:31:25.477 [2024-07-22 14:44:45.035832] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:25.477 [2024-07-22 14:44:45.035837] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 73216 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 73216 ']' 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 73216 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73216 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:25.735 killing process with pid 73216 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73216' 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 73216 00:31:25.735 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 73216 00:31:25.995 00:31:25.995 real 0m1.580s 00:31:25.995 user 0m1.739s 00:31:25.995 sys 0m0.372s 00:31:25.995 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:25.995 14:44:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:31:25.995 ************************************ 00:31:25.995 END TEST exit_on_failed_rpc_init 00:31:25.995 ************************************ 00:31:25.995 14:44:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:31:25.995 00:31:25.995 real 0m14.205s 00:31:25.995 user 0m13.528s 00:31:25.995 sys 0m1.425s 00:31:25.995 14:44:45 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:25.995 14:44:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:25.995 ************************************ 00:31:25.995 END TEST skip_rpc 00:31:25.995 ************************************ 00:31:25.995 14:44:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:31:25.995 14:44:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:25.995 14:44:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:25.995 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:31:25.995 
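Note: every test in this suite tears its target down through the same killprocess helper — confirm the pid still belongs to an SPDK reactor, signal it, then wait. A rough approximation of that pattern (the reactor-name and sudo checks are abbreviated here):

    # Approximation of the killprocess pattern traced above; signal escalation details are omitted.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                        # nothing left to kill
        [[ $(ps --no-headers -o comm= "$pid") == reactor_* ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }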
************************************ 00:31:25.995 START TEST rpc_client 00:31:25.995 ************************************ 00:31:25.995 14:44:45 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:31:26.254 * Looking for test storage... 00:31:26.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:31:26.254 14:44:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:31:26.254 OK 00:31:26.254 14:44:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:31:26.254 00:31:26.254 real 0m0.144s 00:31:26.254 user 0m0.066s 00:31:26.254 sys 0m0.085s 00:31:26.254 14:44:45 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:26.254 14:44:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:31:26.254 ************************************ 00:31:26.254 END TEST rpc_client 00:31:26.254 ************************************ 00:31:26.254 14:44:45 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:31:26.254 14:44:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:26.254 14:44:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:26.254 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:31:26.254 ************************************ 00:31:26.254 START TEST json_config 00:31:26.254 ************************************ 00:31:26.254 14:44:45 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:31:26.254 14:44:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.254 14:44:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.255 14:44:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:31:26.255 14:44:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:31:26.255 14:44:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.255 14:44:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.255 14:44:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:26.255 14:44:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.255 14:44:45 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:26.255 14:44:45 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.255 14:44:45 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.255 14:44:45 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.255 14:44:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.255 14:44:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.255 14:44:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.255 14:44:45 json_config -- paths/export.sh@5 -- # export PATH 00:31:26.515 14:44:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@47 -- # : 0 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:26.515 14:44:45 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:31:26.515 14:44:45 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:31:26.515 INFO: JSON configuration test init 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:26.515 14:44:45 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:31:26.515 14:44:45 json_config -- json_config/common.sh@9 -- # local app=target 00:31:26.515 14:44:45 json_config -- json_config/common.sh@10 -- # shift 00:31:26.515 14:44:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:31:26.515 14:44:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:31:26.515 14:44:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:31:26.515 14:44:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:31:26.515 14:44:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:31:26.515 14:44:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73364 00:31:26.515 Waiting for target to run... 00:31:26.515 14:44:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:31:26.515 14:44:45 json_config -- json_config/common.sh@25 -- # waitforlisten 73364 /var/tmp/spdk_tgt.sock 00:31:26.515 14:44:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@827 -- # '[' -z 73364 ']' 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:26.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:26.515 14:44:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:26.515 [2024-07-22 14:44:45.968321] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:26.515 [2024-07-22 14:44:45.968392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73364 ] 00:31:26.775 [2024-07-22 14:44:46.311244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.775 [2024-07-22 14:44:46.344128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.344 14:44:46 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:27.344 14:44:46 json_config -- common/autotest_common.sh@860 -- # return 0 00:31:27.344 00:31:27.344 14:44:46 json_config -- json_config/common.sh@26 -- # echo '' 00:31:27.344 14:44:46 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:31:27.344 14:44:46 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:31:27.344 14:44:46 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:27.344 14:44:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:27.344 14:44:46 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:31:27.344 14:44:46 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:31:27.344 14:44:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.344 14:44:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:27.344 14:44:46 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:31:27.344 14:44:46 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:31:27.344 14:44:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:31:27.929 14:44:47 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:31:27.929 14:44:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:31:27.929 14:44:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:27.929 14:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:27.929 14:44:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:31:27.929 14:44:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:31:27.929 14:44:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:31:27.929 14:44:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:31:27.929 14:44:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:31:27.929 14:44:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:31:28.188 14:44:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:31:28.188 14:44:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:31:28.188 
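Note: the tgt_rpc shorthand used throughout these json_config steps is just rpc.py pointed at /var/tmp/spdk_tgt.sock, and tgt_check_notification_types compares the types the target reports against the two it expects. A reduced sketch (the function name and error message are mine; the jq filter and type list come from the log):

    # Sketch of the tgt_rpc wrapper and the notify_get_types comparison.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt_rpc() { "$RPC" -s /var/tmp/spdk_tgt.sock "$@"; }

    enabled_types=(bdev_register bdev_unregister)
    mapfile -t got_types < <(tgt_rpc notify_get_types | jq -r '.[]')
    if [[ "${got_types[*]}" != "${enabled_types[*]}" ]]; then
        echo "unexpected notification types: ${got_types[*]}" >&2
    fi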
14:44:47 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:31:28.189 14:44:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:28.189 14:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@55 -- # return 0 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:31:28.189 14:44:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:28.189 14:44:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:31:28.189 14:44:47 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:31:28.189 14:44:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:31:28.448 MallocForNvmf0 00:31:28.448 14:44:47 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:31:28.448 14:44:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:31:28.448 MallocForNvmf1 00:31:28.448 14:44:48 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:31:28.448 14:44:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:31:28.707 [2024-07-22 14:44:48.296998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.707 14:44:48 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:28.707 14:44:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:28.967 14:44:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:31:28.967 14:44:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:31:29.226 14:44:48 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:31:29.226 14:44:48 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:31:29.486 14:44:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:31:29.486 14:44:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:31:29.744 [2024-07-22 14:44:49.147784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:29.744 14:44:49 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:31:29.744 14:44:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:29.744 14:44:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:29.744 14:44:49 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:31:29.744 14:44:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:29.744 14:44:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:29.744 14:44:49 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:31:29.744 14:44:49 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:31:29.744 14:44:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:31:30.054 MallocBdevForConfigChangeCheck 00:31:30.054 14:44:49 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:31:30.054 14:44:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:30.054 14:44:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:30.054 14:44:49 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:31:30.054 14:44:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:30.312 INFO: shutting down applications... 00:31:30.312 14:44:49 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
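Note: taken together, the RPC calls above assemble the NVMe-oF-over-TCP configuration that this json_config run later saves and diffs — two malloc bdevs, a TCP transport, one subsystem with both namespaces, and a listener on 127.0.0.1:4420. Condensed into plain rpc.py invocations (all arguments as they appear in the log):

    # Condensed view of the subsystem setup performed above via tgt_rpc.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420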
00:31:30.312 14:44:49 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:31:30.312 14:44:49 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:31:30.312 14:44:49 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:31:30.312 14:44:49 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:31:30.571 Calling clear_iscsi_subsystem 00:31:30.571 Calling clear_nvmf_subsystem 00:31:30.571 Calling clear_nbd_subsystem 00:31:30.571 Calling clear_ublk_subsystem 00:31:30.571 Calling clear_vhost_blk_subsystem 00:31:30.571 Calling clear_vhost_scsi_subsystem 00:31:30.571 Calling clear_bdev_subsystem 00:31:30.831 14:44:50 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:31:30.831 14:44:50 json_config -- json_config/json_config.sh@343 -- # count=100 00:31:30.831 14:44:50 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:31:30.831 14:44:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:30.831 14:44:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:31:30.831 14:44:50 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:31:31.091 14:44:50 json_config -- json_config/json_config.sh@345 -- # break 00:31:31.091 14:44:50 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:31:31.091 14:44:50 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:31:31.091 14:44:50 json_config -- json_config/common.sh@31 -- # local app=target 00:31:31.091 14:44:50 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:31:31.091 14:44:50 json_config -- json_config/common.sh@35 -- # [[ -n 73364 ]] 00:31:31.091 14:44:50 json_config -- json_config/common.sh@38 -- # kill -SIGINT 73364 00:31:31.091 14:44:50 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:31:31.091 14:44:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:31:31.091 14:44:50 json_config -- json_config/common.sh@41 -- # kill -0 73364 00:31:31.091 14:44:50 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:31:31.661 14:44:51 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:31:31.661 14:44:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:31:31.661 14:44:51 json_config -- json_config/common.sh@41 -- # kill -0 73364 00:31:31.661 14:44:51 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:31:31.661 14:44:51 json_config -- json_config/common.sh@43 -- # break 00:31:31.661 14:44:51 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:31:31.661 SPDK target shutdown done 00:31:31.661 14:44:51 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:31:31.661 INFO: relaunching applications... 00:31:31.661 14:44:51 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:31:31.661 14:44:51 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:31.661 14:44:51 json_config -- json_config/common.sh@9 -- # local app=target 00:31:31.661 14:44:51 json_config -- json_config/common.sh@10 -- # shift 00:31:31.661 14:44:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:31:31.661 14:44:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:31:31.661 14:44:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:31:31.661 14:44:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:31:31.661 14:44:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:31:31.661 14:44:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=73633 00:31:31.661 Waiting for target to run... 00:31:31.661 14:44:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:31:31.661 14:44:51 json_config -- json_config/common.sh@25 -- # waitforlisten 73633 /var/tmp/spdk_tgt.sock 00:31:31.661 14:44:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:31.661 14:44:51 json_config -- common/autotest_common.sh@827 -- # '[' -z 73633 ']' 00:31:31.661 14:44:51 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:31:31.661 14:44:51 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:31.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:31:31.661 14:44:51 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:31:31.661 14:44:51 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:31.661 14:44:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:31.661 [2024-07-22 14:44:51.206248] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:31.661 [2024-07-22 14:44:51.206331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73633 ] 00:31:32.230 [2024-07-22 14:44:51.562263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.230 [2024-07-22 14:44:51.596010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.490 [2024-07-22 14:44:51.891736] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.490 [2024-07-22 14:44:51.923923] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:32.749 14:44:52 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:32.749 14:44:52 json_config -- common/autotest_common.sh@860 -- # return 0 00:31:32.749 00:31:32.749 14:44:52 json_config -- json_config/common.sh@26 -- # echo '' 00:31:32.749 14:44:52 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:31:32.749 INFO: Checking if target configuration is the same... 00:31:32.749 14:44:52 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
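Note: the "Checking if target configuration is the same" step that follows is a normalize-then-diff — the running target's save_config output and the on-disk spdk_tgt_config.json are both run through config_filter.py -method sort, and a plain diff -u decides the result. Roughly (temp-file naming simplified from the mktemp sequence in the log):

    # Rough equivalent of the json_diff.sh flow traced below.
    FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | "$FILTER" -method sort > /tmp/live_config.json
    "$FILTER" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk_config.json
    if diff -u /tmp/live_config.json /tmp/disk_config.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi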
00:31:32.749 14:44:52 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:32.749 14:44:52 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:31:32.749 14:44:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:32.749 + '[' 2 -ne 2 ']' 00:31:32.749 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:31:32.749 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:31:32.749 + rootdir=/home/vagrant/spdk_repo/spdk 00:31:32.749 +++ basename /dev/fd/62 00:31:32.749 ++ mktemp /tmp/62.XXX 00:31:32.749 + tmp_file_1=/tmp/62.33I 00:31:32.749 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:32.749 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:31:32.749 + tmp_file_2=/tmp/spdk_tgt_config.json.7hf 00:31:32.749 + ret=0 00:31:32.749 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:33.014 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:33.014 + diff -u /tmp/62.33I /tmp/spdk_tgt_config.json.7hf 00:31:33.014 INFO: JSON config files are the same 00:31:33.014 + echo 'INFO: JSON config files are the same' 00:31:33.014 + rm /tmp/62.33I /tmp/spdk_tgt_config.json.7hf 00:31:33.014 + exit 0 00:31:33.014 14:44:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:31:33.014 14:44:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:31:33.014 INFO: changing configuration and checking if this can be detected... 00:31:33.014 14:44:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:31:33.014 14:44:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:31:33.283 14:44:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:31:33.283 14:44:52 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:33.283 14:44:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:31:33.283 + '[' 2 -ne 2 ']' 00:31:33.283 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:31:33.283 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:31:33.283 + rootdir=/home/vagrant/spdk_repo/spdk 00:31:33.283 +++ basename /dev/fd/62 00:31:33.283 ++ mktemp /tmp/62.XXX 00:31:33.283 + tmp_file_1=/tmp/62.Fv6 00:31:33.283 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:33.283 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:31:33.283 + tmp_file_2=/tmp/spdk_tgt_config.json.Jf1 00:31:33.283 + ret=0 00:31:33.283 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:33.541 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:31:33.801 + diff -u /tmp/62.Fv6 /tmp/spdk_tgt_config.json.Jf1 00:31:33.801 + ret=1 00:31:33.801 + echo '=== Start of file: /tmp/62.Fv6 ===' 00:31:33.801 + cat /tmp/62.Fv6 00:31:33.801 + echo '=== End of file: /tmp/62.Fv6 ===' 00:31:33.801 + echo '' 00:31:33.801 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Jf1 ===' 00:31:33.801 + cat /tmp/spdk_tgt_config.json.Jf1 00:31:33.801 + echo '=== End of file: /tmp/spdk_tgt_config.json.Jf1 ===' 00:31:33.801 + echo '' 00:31:33.801 + rm /tmp/62.Fv6 /tmp/spdk_tgt_config.json.Jf1 00:31:33.801 + exit 1 00:31:33.801 INFO: configuration change detected. 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@317 -- # [[ -n 73633 ]] 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@193 -- # uname -s 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:33.801 14:44:53 json_config -- json_config/json_config.sh@323 -- # killprocess 73633 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@946 -- # '[' -z 73633 ']' 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@950 -- # kill -0 73633 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@951 -- # uname 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73633 00:31:33.801 
14:44:53 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:33.801 killing process with pid 73633 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73633' 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@965 -- # kill 73633 00:31:33.801 14:44:53 json_config -- common/autotest_common.sh@970 -- # wait 73633 00:31:34.061 14:44:53 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:31:34.061 14:44:53 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:31:34.061 14:44:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:34.061 14:44:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:34.061 14:44:53 json_config -- json_config/json_config.sh@328 -- # return 0 00:31:34.061 INFO: Success 00:31:34.061 14:44:53 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:31:34.061 ************************************ 00:31:34.061 END TEST json_config 00:31:34.061 ************************************ 00:31:34.061 00:31:34.061 real 0m7.805s 00:31:34.061 user 0m10.956s 00:31:34.061 sys 0m1.805s 00:31:34.061 14:44:53 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:34.061 14:44:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:31:34.061 14:44:53 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:31:34.061 14:44:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:34.061 14:44:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:34.061 14:44:53 -- common/autotest_common.sh@10 -- # set +x 00:31:34.061 ************************************ 00:31:34.061 START TEST json_config_extra_key 00:31:34.061 ************************************ 00:31:34.061 14:44:53 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:31:34.321 14:44:53 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:34.321 14:44:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.321 14:44:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.321 14:44:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.321 14:44:53 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.321 14:44:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.321 14:44:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.321 14:44:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:31:34.321 14:44:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.321 14:44:53 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:34.321 14:44:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:31:34.321 INFO: launching applications... 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:31:34.321 14:44:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:31:34.321 Waiting for target to run... 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=73803 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 73803 /var/tmp/spdk_tgt.sock 00:31:34.321 14:44:53 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 73803 ']' 00:31:34.321 14:44:53 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:31:34.321 14:44:53 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:31:34.321 14:44:53 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:34.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:31:34.321 14:44:53 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:31:34.321 14:44:53 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:34.321 14:44:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:31:34.321 [2024-07-22 14:44:53.831427] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:34.321 [2024-07-22 14:44:53.831512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73803 ] 00:31:34.581 [2024-07-22 14:44:54.192748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.839 [2024-07-22 14:44:54.227383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.408 00:31:35.408 INFO: shutting down applications... 00:31:35.408 14:44:54 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:35.408 14:44:54 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:31:35.408 14:44:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
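Note: shutdown of the extra-key target below is cooperative — send SIGINT, then poll the pid with kill -0 in half-second steps, up to 30 tries, before declaring "SPDK target shutdown done". A compact version of that loop (pid taken from this run; error handling trimmed):

    # Compact form of the shutdown loop that follows; the 30 x 0.5s budget matches the (( i < 30 )) / sleep 0.5 trace.
    app_pid=73803
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done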
00:31:35.408 14:44:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 73803 ]] 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 73803 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73803 00:31:35.408 14:44:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:31:35.667 14:44:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:31:35.667 14:44:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:31:35.667 14:44:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 73803 00:31:35.667 14:44:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:31:35.667 14:44:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:31:35.667 14:44:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:31:35.667 14:44:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:31:35.667 SPDK target shutdown done 00:31:35.667 14:44:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:31:35.667 Success 00:31:35.667 00:31:35.667 real 0m1.594s 00:31:35.667 user 0m1.367s 00:31:35.667 sys 0m0.397s 00:31:35.667 14:44:55 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:35.667 14:44:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:31:35.667 ************************************ 00:31:35.667 END TEST json_config_extra_key 00:31:35.667 ************************************ 00:31:35.667 14:44:55 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:31:35.667 14:44:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:35.667 14:44:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:35.667 14:44:55 -- common/autotest_common.sh@10 -- # set +x 00:31:35.927 ************************************ 00:31:35.927 START TEST alias_rpc 00:31:35.927 ************************************ 00:31:35.927 14:44:55 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:31:35.927 * Looking for test storage... 00:31:35.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:31:35.927 14:44:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:31:35.927 14:44:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=73883 00:31:35.927 14:44:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:35.927 14:44:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 73883 00:31:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:35.927 14:44:55 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 73883 ']' 00:31:35.927 14:44:55 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.927 14:44:55 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:35.927 14:44:55 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.927 14:44:55 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:35.927 14:44:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:35.927 [2024-07-22 14:44:55.462636] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:35.927 [2024-07-22 14:44:55.462730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73883 ] 00:31:36.187 [2024-07-22 14:44:55.590170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:36.187 [2024-07-22 14:44:55.654452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.754 14:44:56 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:36.754 14:44:56 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:31:36.754 14:44:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:31:37.012 14:44:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 73883 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 73883 ']' 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 73883 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73883 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73883' 00:31:37.012 killing process with pid 73883 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@965 -- # kill 73883 00:31:37.012 14:44:56 alias_rpc -- common/autotest_common.sh@970 -- # wait 73883 00:31:37.271 00:31:37.271 real 0m1.593s 00:31:37.271 user 0m1.717s 00:31:37.271 sys 0m0.418s 00:31:37.271 14:44:56 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:37.271 14:44:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:37.271 ************************************ 00:31:37.271 END TEST alias_rpc 00:31:37.271 ************************************ 00:31:37.529 14:44:56 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:31:37.529 14:44:56 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:31:37.529 14:44:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:37.529 14:44:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:37.529 14:44:56 -- common/autotest_common.sh@10 -- # set +x 00:31:37.529 ************************************ 00:31:37.529 START TEST dpdk_mem_utility 00:31:37.529 ************************************ 00:31:37.529 14:44:56 dpdk_mem_utility -- 
common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:31:37.529 * Looking for test storage... 00:31:37.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:31:37.529 14:44:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:31:37.529 14:44:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=73974 00:31:37.529 14:44:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:37.529 14:44:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 73974 00:31:37.529 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 73974 ']' 00:31:37.529 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.529 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:37.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.529 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:37.529 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:37.529 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:31:37.529 [2024-07-22 14:44:57.132092] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:37.529 [2024-07-22 14:44:57.132172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73974 ] 00:31:37.788 [2024-07-22 14:44:57.270444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.788 [2024-07-22 14:44:57.320642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.356 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:38.356 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:31:38.356 14:44:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:31:38.356 14:44:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:31:38.356 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:38.356 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:31:38.618 { 00:31:38.618 "filename": "/tmp/spdk_mem_dump.txt" 00:31:38.618 } 00:31:38.618 14:44:57 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:38.618 14:44:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:31:38.618 DPDK memory size 814.000000 MiB in 1 heap(s) 00:31:38.618 1 heaps totaling size 814.000000 MiB 00:31:38.618 size: 814.000000 MiB heap id: 0 00:31:38.618 end heaps---------- 00:31:38.618 8 mempools totaling size 598.116089 MiB 00:31:38.618 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:31:38.618 size: 158.602051 MiB name: PDU_data_out_Pool 00:31:38.618 size: 84.521057 MiB name: bdev_io_73974 00:31:38.618 size: 51.011292 MiB name: evtpool_73974 00:31:38.618 size: 
50.003479 MiB name: msgpool_73974 00:31:38.618 size: 21.763794 MiB name: PDU_Pool 00:31:38.618 size: 19.513306 MiB name: SCSI_TASK_Pool 00:31:38.618 size: 0.026123 MiB name: Session_Pool 00:31:38.618 end mempools------- 00:31:38.618 6 memzones totaling size 4.142822 MiB 00:31:38.618 size: 1.000366 MiB name: RG_ring_0_73974 00:31:38.618 size: 1.000366 MiB name: RG_ring_1_73974 00:31:38.618 size: 1.000366 MiB name: RG_ring_4_73974 00:31:38.618 size: 1.000366 MiB name: RG_ring_5_73974 00:31:38.618 size: 0.125366 MiB name: RG_ring_2_73974 00:31:38.618 size: 0.015991 MiB name: RG_ring_3_73974 00:31:38.618 end memzones------- 00:31:38.618 14:44:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:31:38.618 heap id: 0 total size: 814.000000 MiB number of busy elements: 224 number of free elements: 15 00:31:38.618 list of free elements. size: 12.485840 MiB 00:31:38.618 element at address: 0x200000400000 with size: 1.999512 MiB 00:31:38.618 element at address: 0x200018e00000 with size: 0.999878 MiB 00:31:38.618 element at address: 0x200019000000 with size: 0.999878 MiB 00:31:38.618 element at address: 0x200003e00000 with size: 0.996277 MiB 00:31:38.618 element at address: 0x200031c00000 with size: 0.994446 MiB 00:31:38.618 element at address: 0x200013800000 with size: 0.978699 MiB 00:31:38.618 element at address: 0x200007000000 with size: 0.959839 MiB 00:31:38.618 element at address: 0x200019200000 with size: 0.936584 MiB 00:31:38.618 element at address: 0x200000200000 with size: 0.837036 MiB 00:31:38.618 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:31:38.618 element at address: 0x20000b200000 with size: 0.489807 MiB 00:31:38.618 element at address: 0x200000800000 with size: 0.487061 MiB 00:31:38.618 element at address: 0x200019400000 with size: 0.485657 MiB 00:31:38.618 element at address: 0x200027e00000 with size: 0.398132 MiB 00:31:38.618 element at address: 0x200003a00000 with size: 0.350769 MiB 00:31:38.618 list of standard malloc elements. 
size: 199.251587 MiB 00:31:38.618 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:31:38.618 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:31:38.618 element at address: 0x200018efff80 with size: 1.000122 MiB 00:31:38.618 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:31:38.618 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:31:38.618 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:31:38.618 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:31:38.618 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:31:38.618 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:31:38.618 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:31:38.618 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a59cc0 with size: 0.000183 MiB 
00:31:38.619 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003adb300 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003adb500 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003affa80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003affb40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:31:38.619 element at 
address: 0x20001aa92b00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa94fc0 
with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:31:38.619 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:31:38.619 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6eac0 with size: 0.000183 MiB 
00:31:38.620 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:31:38.620 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:31:38.620 list of memzone associated elements. 
size: 602.262573 MiB 00:31:38.620 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:31:38.620 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:31:38.620 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:31:38.620 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:31:38.620 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:31:38.620 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_73974_0 00:31:38.620 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:31:38.620 associated memzone info: size: 48.002930 MiB name: MP_evtpool_73974_0 00:31:38.620 element at address: 0x200003fff380 with size: 48.003052 MiB 00:31:38.620 associated memzone info: size: 48.002930 MiB name: MP_msgpool_73974_0 00:31:38.620 element at address: 0x2000195be940 with size: 20.255554 MiB 00:31:38.620 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:31:38.620 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:31:38.620 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:31:38.620 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:31:38.620 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_73974 00:31:38.620 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:31:38.620 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_73974 00:31:38.620 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:31:38.620 associated memzone info: size: 1.007996 MiB name: MP_evtpool_73974 00:31:38.620 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:31:38.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:31:38.620 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:31:38.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:31:38.620 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:31:38.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:31:38.620 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:31:38.620 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:31:38.620 element at address: 0x200003eff180 with size: 1.000488 MiB 00:31:38.620 associated memzone info: size: 1.000366 MiB name: RG_ring_0_73974 00:31:38.620 element at address: 0x200003affc00 with size: 1.000488 MiB 00:31:38.620 associated memzone info: size: 1.000366 MiB name: RG_ring_1_73974 00:31:38.620 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:31:38.620 associated memzone info: size: 1.000366 MiB name: RG_ring_4_73974 00:31:38.620 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:31:38.620 associated memzone info: size: 1.000366 MiB name: RG_ring_5_73974 00:31:38.620 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:31:38.620 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_73974 00:31:38.620 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:31:38.620 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:31:38.620 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:31:38.620 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:31:38.620 element at address: 0x20001947c540 with size: 0.250488 MiB 00:31:38.620 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:31:38.620 element at address: 0x200003adf880 with size: 0.125488 MiB 00:31:38.620 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_73974 00:31:38.620 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:31:38.620 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:31:38.620 element at address: 0x200027e66040 with size: 0.023743 MiB 00:31:38.620 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:31:38.620 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:31:38.620 associated memzone info: size: 0.015991 MiB name: RG_ring_3_73974 00:31:38.620 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:31:38.620 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:31:38.620 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:31:38.620 associated memzone info: size: 0.000183 MiB name: MP_msgpool_73974 00:31:38.620 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:31:38.620 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_73974 00:31:38.620 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:31:38.620 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:31:38.620 14:44:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:31:38.620 14:44:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 73974 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 73974 ']' 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 73974 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73974 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73974' 00:31:38.620 killing process with pid 73974 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 73974 00:31:38.620 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 73974 00:31:38.880 00:31:38.880 real 0m1.474s 00:31:38.880 user 0m1.490s 00:31:38.880 sys 0m0.392s 00:31:38.880 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:38.880 14:44:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:31:38.880 ************************************ 00:31:38.880 END TEST dpdk_mem_utility 00:31:38.880 ************************************ 00:31:38.880 14:44:58 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:31:38.880 14:44:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:38.880 14:44:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:38.880 14:44:58 -- common/autotest_common.sh@10 -- # set +x 00:31:38.880 ************************************ 00:31:38.880 START TEST event 00:31:38.880 ************************************ 00:31:38.880 14:44:58 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:31:39.139 * Looking for test storage... 
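Stepping back to the dpdk_mem_utility run above: the heap/mempool/memzone report is produced by asking the running target to dump its DPDK memory state and then post-processing the dump file with scripts/dpdk_mem_info.py. Roughly (paths as in the trace; the RPC writes /tmp/spdk_mem_dump.txt as shown above, and -m 0 selects heap id 0 as in the second invocation):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                   # summary: heaps, mempools, memzones
    $SPDK/scripts/dpdk_mem_info.py -m 0              # per-element view of heap 0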
00:31:39.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:31:39.139 14:44:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:39.139 14:44:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:31:39.139 14:44:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:31:39.139 14:44:58 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:31:39.139 14:44:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:39.139 14:44:58 event -- common/autotest_common.sh@10 -- # set +x 00:31:39.139 ************************************ 00:31:39.139 START TEST event_perf 00:31:39.139 ************************************ 00:31:39.139 14:44:58 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:31:39.139 Running I/O for 1 seconds...[2024-07-22 14:44:58.639083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:39.139 [2024-07-22 14:44:58.639767] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74059 ] 00:31:39.408 [2024-07-22 14:44:58.782698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:39.408 [2024-07-22 14:44:58.837734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.408 [2024-07-22 14:44:58.837846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.408 [2024-07-22 14:44:58.838047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.408 [2024-07-22 14:44:58.838165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.345 Running I/O for 1 seconds... 00:31:40.345 lcore 0: 182291 00:31:40.345 lcore 1: 182289 00:31:40.345 lcore 2: 182288 00:31:40.345 lcore 3: 182290 00:31:40.345 done. 00:31:40.345 00:31:40.345 real 0m1.295s 00:31:40.345 user 0m4.116s 00:31:40.345 sys 0m0.056s 00:31:40.345 14:44:59 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:40.345 14:44:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:31:40.345 ************************************ 00:31:40.345 END TEST event_perf 00:31:40.345 ************************************ 00:31:40.345 14:44:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:31:40.345 14:44:59 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:31:40.345 14:44:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:40.345 14:44:59 event -- common/autotest_common.sh@10 -- # set +x 00:31:40.345 ************************************ 00:31:40.345 START TEST event_reactor 00:31:40.345 ************************************ 00:31:40.345 14:44:59 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:31:40.604 [2024-07-22 14:44:59.998048] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:40.604 [2024-07-22 14:44:59.998228] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74092 ] 00:31:40.604 [2024-07-22 14:45:00.141879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.604 [2024-07-22 14:45:00.195966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.985 test_start 00:31:41.985 oneshot 00:31:41.985 tick 100 00:31:41.985 tick 100 00:31:41.985 tick 250 00:31:41.985 tick 100 00:31:41.985 tick 100 00:31:41.985 tick 250 00:31:41.985 tick 100 00:31:41.985 tick 500 00:31:41.985 tick 100 00:31:41.985 tick 100 00:31:41.985 tick 250 00:31:41.985 tick 100 00:31:41.985 tick 100 00:31:41.985 test_end 00:31:41.985 00:31:41.985 real 0m1.290s 00:31:41.985 user 0m1.135s 00:31:41.985 sys 0m0.050s 00:31:41.985 14:45:01 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:41.985 14:45:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:31:41.985 ************************************ 00:31:41.985 END TEST event_reactor 00:31:41.985 ************************************ 00:31:41.985 14:45:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:31:41.985 14:45:01 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:31:41.985 14:45:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:41.985 14:45:01 event -- common/autotest_common.sh@10 -- # set +x 00:31:41.985 ************************************ 00:31:41.985 START TEST event_reactor_perf 00:31:41.985 ************************************ 00:31:41.985 14:45:01 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:31:41.985 [2024-07-22 14:45:01.352315] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:41.985 [2024-07-22 14:45:01.352428] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74133 ] 00:31:41.985 [2024-07-22 14:45:01.492655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.985 [2024-07-22 14:45:01.542268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.369 test_start 00:31:43.369 test_end 00:31:43.369 Performance: 435545 events per second 00:31:43.369 00:31:43.369 real 0m1.285s 00:31:43.369 user 0m1.131s 00:31:43.369 sys 0m0.049s 00:31:43.369 ************************************ 00:31:43.369 END TEST event_reactor_perf 00:31:43.369 ************************************ 00:31:43.369 14:45:02 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:43.369 14:45:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:31:43.369 14:45:02 event -- event/event.sh@49 -- # uname -s 00:31:43.369 14:45:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:31:43.369 14:45:02 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:31:43.369 14:45:02 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:43.369 14:45:02 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:43.370 14:45:02 event -- common/autotest_common.sh@10 -- # set +x 00:31:43.370 ************************************ 00:31:43.370 START TEST event_scheduler 00:31:43.370 ************************************ 00:31:43.370 14:45:02 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:31:43.370 * Looking for test storage... 00:31:43.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:31:43.370 14:45:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:31:43.370 14:45:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=74189 00:31:43.370 14:45:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:31:43.370 14:45:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 74189 00:31:43.370 14:45:02 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 74189 ']' 00:31:43.370 14:45:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:31:43.370 14:45:02 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.370 14:45:02 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:43.370 14:45:02 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.370 14:45:02 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:43.370 14:45:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:31:43.370 [2024-07-22 14:45:02.820818] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:43.370 [2024-07-22 14:45:02.820965] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74189 ] 00:31:43.370 [2024-07-22 14:45:02.946141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.629 [2024-07-22 14:45:03.003199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.629 [2024-07-22 14:45:03.003316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.629 [2024-07-22 14:45:03.003817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.629 [2024-07-22 14:45:03.003824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:31:44.198 14:45:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:31:44.198 POWER: Env isn't set yet! 00:31:44.198 POWER: Attempting to initialise ACPI cpufreq power management... 00:31:44.198 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:44.198 POWER: Cannot set governor of lcore 0 to userspace 00:31:44.198 POWER: Attempting to initialise PSTAT power management... 00:31:44.198 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:44.198 POWER: Cannot set governor of lcore 0 to performance 00:31:44.198 POWER: Attempting to initialise CPPC power management... 00:31:44.198 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:31:44.198 POWER: Cannot set governor of lcore 0 to userspace 00:31:44.198 POWER: Attempting to initialise VM power management... 00:31:44.198 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:31:44.198 POWER: Unable to set Power Management Environment for lcore 0 00:31:44.198 [2024-07-22 14:45:03.707466] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:31:44.198 [2024-07-22 14:45:03.707478] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:31:44.198 [2024-07-22 14:45:03.707483] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:31:44.198 [2024-07-22 14:45:03.707492] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:31:44.198 [2024-07-22 14:45:03.707497] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:31:44.198 [2024-07-22 14:45:03.707501] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.198 14:45:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:31:44.198 [2024-07-22 14:45:03.777947] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
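The scheduler test above is driven entirely over RPC: the app starts with --wait-for-rpc, the dynamic scheduler is selected, and only then is the framework initialized (the POWER messages just mean no cpufreq driver is available in this VM, and the NOTICE lines show the dynamic scheduler proceeding without the dpdk governor). Against a manually started instance the equivalent steps would be roughly the sketch below; the default /var/tmp/spdk.sock socket is assumed, and framework_get_scheduler is an extra verification call not issued in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!
    # wait for the RPC socket to come up (see the waitforlisten sketch earlier)
    $SPDK/scripts/rpc.py framework_set_scheduler dynamic
    $SPDK/scripts/rpc.py framework_start_init
    $SPDK/scripts/rpc.py framework_get_scheduler    # should report the dynamic scheduler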
00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.198 14:45:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:44.198 14:45:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:31:44.198 ************************************ 00:31:44.198 START TEST scheduler_create_thread 00:31:44.198 ************************************ 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.198 2 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.198 3 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.198 4 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.198 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 5 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 6 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 7 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 8 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 9 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 10 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.458 14:45:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:45.838 14:45:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.838 14:45:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:31:45.838 14:45:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:31:45.838 14:45:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.838 14:45:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:46.774 ************************************ 00:31:46.774 END TEST scheduler_create_thread 00:31:46.774 ************************************ 00:31:46.774 14:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.774 00:31:46.774 real 0m2.611s 00:31:46.774 user 0m0.013s 00:31:46.774 sys 0m0.004s 00:31:46.774 14:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:46.774 14:45:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:31:47.032 14:45:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:31:47.032 14:45:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 74189 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 74189 ']' 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 74189 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74189 00:31:47.033 killing process with pid 74189 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74189' 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 74189 00:31:47.033 14:45:06 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 74189 00:31:47.291 [2024-07-22 14:45:06.879253] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
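For reference, the thread lifecycle exercised by scheduler_create_thread above uses RPCs provided by the test-only scheduler_plugin (they are not part of the public SPDK RPC set, and rpc.py only finds them if the plugin directory under test/event/scheduler is importable, e.g. via PYTHONPATH). The traced sequence boils down to roughly:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    # pinned threads, one per core mask (0x1, 0x2, 0x4, 0x8): fully busy and fully idle
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0
    # an unpinned thread created idle and later made 50% active
    tid=$($RPC scheduler_thread_create -n half_active -a 0)
    $RPC scheduler_thread_set_active "$tid" 50
    # a thread created only to be deleted again
    tid=$($RPC scheduler_thread_create -n deleted -a 100)
    $RPC scheduler_thread_delete "$tid"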
00:31:47.550 00:31:47.550 real 0m4.398s 00:31:47.550 user 0m8.260s 00:31:47.550 sys 0m0.351s 00:31:47.550 ************************************ 00:31:47.550 END TEST event_scheduler 00:31:47.550 ************************************ 00:31:47.550 14:45:07 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:47.550 14:45:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:31:47.550 14:45:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:31:47.550 14:45:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:31:47.550 14:45:07 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:47.550 14:45:07 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:47.550 14:45:07 event -- common/autotest_common.sh@10 -- # set +x 00:31:47.550 ************************************ 00:31:47.550 START TEST app_repeat 00:31:47.550 ************************************ 00:31:47.550 14:45:07 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=74302 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 74302' 00:31:47.550 Process app_repeat pid: 74302 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:31:47.550 spdk_app_start Round 0 00:31:47.550 14:45:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74302 /var/tmp/spdk-nbd.sock 00:31:47.550 14:45:07 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74302 ']' 00:31:47.550 14:45:07 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:47.550 14:45:07 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:47.550 14:45:07 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:47.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:47.550 14:45:07 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:47.550 14:45:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:31:47.810 [2024-07-22 14:45:07.188196] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:47.810 [2024-07-22 14:45:07.188344] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:31:47.810 [2024-07-22 14:45:07.327936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:47.810 [2024-07-22 14:45:07.381824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.810 [2024-07-22 14:45:07.381826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.763 14:45:08 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:48.763 14:45:08 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:31:48.763 14:45:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:48.763 Malloc0 00:31:48.763 14:45:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:49.023 Malloc1 00:31:49.023 14:45:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:49.023 14:45:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:49.287 /dev/nbd0 00:31:49.287 14:45:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:49.287 14:45:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:49.287 14:45:08 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:49.287 1+0 records in 00:31:49.287 1+0 records out 00:31:49.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575464 s, 7.1 MB/s 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:49.287 14:45:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:31:49.287 14:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:49.287 14:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:49.287 14:45:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:49.546 /dev/nbd1 00:31:49.546 14:45:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:49.546 14:45:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:49.546 1+0 records in 00:31:49.546 1+0 records out 00:31:49.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206231 s, 19.9 MB/s 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:49.546 14:45:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:31:49.546 14:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:49.546 14:45:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:49.546 14:45:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:49.546 14:45:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.546 
14:45:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:49.805 { 00:31:49.805 "bdev_name": "Malloc0", 00:31:49.805 "nbd_device": "/dev/nbd0" 00:31:49.805 }, 00:31:49.805 { 00:31:49.805 "bdev_name": "Malloc1", 00:31:49.805 "nbd_device": "/dev/nbd1" 00:31:49.805 } 00:31:49.805 ]' 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:49.805 { 00:31:49.805 "bdev_name": "Malloc0", 00:31:49.805 "nbd_device": "/dev/nbd0" 00:31:49.805 }, 00:31:49.805 { 00:31:49.805 "bdev_name": "Malloc1", 00:31:49.805 "nbd_device": "/dev/nbd1" 00:31:49.805 } 00:31:49.805 ]' 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:49.805 /dev/nbd1' 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:49.805 /dev/nbd1' 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:49.805 14:45:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:49.806 256+0 records in 00:31:49.806 256+0 records out 00:31:49.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118399 s, 88.6 MB/s 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:49.806 256+0 records in 00:31:49.806 256+0 records out 00:31:49.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192557 s, 54.5 MB/s 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:49.806 256+0 records in 00:31:49.806 256+0 records out 00:31:49.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023561 s, 44.5 MB/s 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:49.806 14:45:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:49.806 14:45:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.065 14:45:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:50.323 14:45:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:50.323 14:45:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:50.323 14:45:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:50.323 14:45:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:50.323 14:45:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:50.324 14:45:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:50.324 14:45:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:31:50.324 14:45:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:31:50.324 14:45:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:50.324 14:45:09 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:50.324 14:45:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:50.582 14:45:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:50.582 14:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:50.582 14:45:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:50.582 14:45:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:31:50.582 14:45:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:50.841 14:45:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:31:50.841 [2024-07-22 14:45:10.421393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:50.841 [2024-07-22 14:45:10.469179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.841 [2024-07-22 14:45:10.469181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.100 [2024-07-22 14:45:10.509841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:31:51.100 [2024-07-22 14:45:10.509890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:31:54.417 14:45:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:31:54.418 spdk_app_start Round 1 00:31:54.418 14:45:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:31:54.418 14:45:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74302 /var/tmp/spdk-nbd.sock 00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74302 ']' 00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:54.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
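Round 0 above exercises the write/verify half of nbd_rpc_data_verify: 1 MiB of random data is pushed through each exported /dev/nbdX with direct I/O and then compared back against the source file. A condensed sketch of that flow, with paths shortened and error handling omitted (an illustration of the pattern, not the nbd_common.sh source):

    # Condensed from the Round 0 dd/cmp trace above -- illustration only; real paths live under spdk/test/event.
    tmp_file=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the nbd device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"                          # read back and compare the first 1 MiB
    done
    rm "$tmp_file"
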
00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:54.418 14:45:13 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:31:54.418 14:45:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:54.418 Malloc0 00:31:54.418 14:45:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:54.418 Malloc1 00:31:54.418 14:45:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:54.418 14:45:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:31:54.680 /dev/nbd0 00:31:54.680 14:45:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:54.680 14:45:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:54.680 1+0 records in 00:31:54.680 1+0 records out 
00:31:54.680 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419988 s, 9.8 MB/s 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:54.680 14:45:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:31:54.680 14:45:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:54.680 14:45:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:54.680 14:45:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:31:54.939 /dev/nbd1 00:31:54.939 14:45:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:54.939 14:45:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:54.939 14:45:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:31:54.939 1+0 records in 00:31:54.939 1+0 records out 00:31:54.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297486 s, 13.8 MB/s 00:31:54.940 14:45:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:54.940 14:45:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:31:54.940 14:45:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:31:54.940 14:45:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:54.940 14:45:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:31:54.940 14:45:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:54.940 14:45:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:54.940 14:45:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:54.940 14:45:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:54.940 14:45:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:55.199 { 00:31:55.199 "bdev_name": "Malloc0", 00:31:55.199 "nbd_device": "/dev/nbd0" 00:31:55.199 }, 00:31:55.199 { 00:31:55.199 "bdev_name": "Malloc1", 00:31:55.199 "nbd_device": "/dev/nbd1" 00:31:55.199 } 
00:31:55.199 ]' 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:55.199 { 00:31:55.199 "bdev_name": "Malloc0", 00:31:55.199 "nbd_device": "/dev/nbd0" 00:31:55.199 }, 00:31:55.199 { 00:31:55.199 "bdev_name": "Malloc1", 00:31:55.199 "nbd_device": "/dev/nbd1" 00:31:55.199 } 00:31:55.199 ]' 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:55.199 /dev/nbd1' 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:55.199 /dev/nbd1' 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:31:55.199 256+0 records in 00:31:55.199 256+0 records out 00:31:55.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135678 s, 77.3 MB/s 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:55.199 256+0 records in 00:31:55.199 256+0 records out 00:31:55.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221299 s, 47.4 MB/s 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:55.199 14:45:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:55.459 256+0 records in 00:31:55.459 256+0 records out 00:31:55.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019281 s, 54.4 MB/s 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:55.459 14:45:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:55.459 14:45:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.719 14:45:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:55.979 14:45:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:31:55.979 14:45:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:31:56.239 14:45:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:31:56.498 [2024-07-22 14:45:15.954070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:56.498 [2024-07-22 14:45:16.006566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.498 [2024-07-22 14:45:16.006568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.498 [2024-07-22 14:45:16.049267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:31:56.498 [2024-07-22 14:45:16.049321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:31:59.803 14:45:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:31:59.803 spdk_app_start Round 2 00:31:59.803 14:45:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:31:59.803 14:45:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 74302 /var/tmp/spdk-nbd.sock 00:31:59.803 14:45:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74302 ']' 00:31:59.803 14:45:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:59.803 14:45:18 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:59.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:59.803 14:45:18 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
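Each nbd_start_disk above is followed by a waitfornbd check: the helper polls /proc/partitions until the device node is registered, then confirms it actually serves reads with a single direct-I/O dd. A rough reconstruction of that logic from the trace; the retry delay and the temporary file path are guesses, not copied from the script:

    # Rough reconstruction of waitfornbd from the xtrace above -- retry timing and tmp path are assumptions.
    waitfornbd() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break     # device registered with the kernel?
            sleep 0.1                                            # assumed back-off between polls
        done
        # confirm the device serves a real read
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                         # non-empty read means the device is usable
    }
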
00:31:59.803 14:45:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:59.803 14:45:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:31:59.803 14:45:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:59.803 14:45:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:31:59.803 14:45:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:31:59.803 Malloc0 00:31:59.803 14:45:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:32:00.102 Malloc1 00:32:00.102 14:45:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:00.102 14:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:00.103 14:45:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:32:00.103 /dev/nbd0 00:32:00.103 14:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:00.103 14:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:00.103 14:45:19 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:32:00.103 14:45:19 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:32:00.103 14:45:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:32:00.103 14:45:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:32:00.103 14:45:19 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:00.361 1+0 records in 00:32:00.361 1+0 records out 
00:32:00.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408926 s, 10.0 MB/s 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:32:00.361 14:45:19 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:32:00.361 14:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:00.362 14:45:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:00.362 14:45:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:32:00.362 /dev/nbd1 00:32:00.362 14:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:00.362 14:45:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:00.362 14:45:19 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:32:00.362 14:45:19 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:32:00.621 14:45:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:32:00.621 14:45:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:32:00.621 14:45:19 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:32:00.621 14:45:19 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:32:00.621 14:45:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:32:00.621 14:45:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:32:00.621 14:45:19 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:32:00.621 1+0 records in 00:32:00.621 1+0 records out 00:32:00.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526038 s, 7.8 MB/s 00:32:00.621 14:45:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:00.621 14:45:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:32:00.621 14:45:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:32:00.621 14:45:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:32:00.621 14:45:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:00.621 { 00:32:00.621 "bdev_name": "Malloc0", 00:32:00.621 "nbd_device": "/dev/nbd0" 00:32:00.621 }, 00:32:00.621 { 00:32:00.621 "bdev_name": "Malloc1", 00:32:00.621 "nbd_device": "/dev/nbd1" 00:32:00.621 } 
00:32:00.621 ]' 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:00.621 { 00:32:00.621 "bdev_name": "Malloc0", 00:32:00.621 "nbd_device": "/dev/nbd0" 00:32:00.621 }, 00:32:00.621 { 00:32:00.621 "bdev_name": "Malloc1", 00:32:00.621 "nbd_device": "/dev/nbd1" 00:32:00.621 } 00:32:00.621 ]' 00:32:00.621 14:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:00.879 /dev/nbd1' 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:32:00.879 /dev/nbd1' 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:32:00.879 14:45:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:32:00.880 256+0 records in 00:32:00.880 256+0 records out 00:32:00.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120437 s, 87.1 MB/s 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:00.880 256+0 records in 00:32:00.880 256+0 records out 00:32:00.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227119 s, 46.2 MB/s 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:00.880 256+0 records in 00:32:00.880 256+0 records out 00:32:00.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223673 s, 46.9 MB/s 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:00.880 14:45:20 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:00.880 14:45:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:01.138 14:45:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:01.396 14:45:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:01.396 14:45:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:01.396 14:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:01.396 14:45:21 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:01.655 14:45:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:32:01.655 14:45:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:32:01.915 14:45:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:32:01.915 [2024-07-22 14:45:21.447840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:01.915 [2024-07-22 14:45:21.496926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.915 [2024-07-22 14:45:21.496929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.915 [2024-07-22 14:45:21.539360] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:32:01.915 [2024-07-22 14:45:21.539414] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:32:05.203 14:45:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 74302 /var/tmp/spdk-nbd.sock 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 74302 ']' 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:05.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
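The nbd_get_count checks above all follow the same pattern: ask the target for its nbd table over the RPC socket, extract the device paths with jq, and count them with grep (falling back to 0 when the table is empty). A compact sketch of that pipeline, with the rpc.py path spelled out once:

    # Pattern behind the nbd_get_count checks above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    disks_json=$("$rpc" -s "$sock" nbd_get_disks)             # e.g. [{"bdev_name":"Malloc0","nbd_device":"/dev/nbd0"}, ...]
    disks=$(echo "$disks_json" | jq -r '.[] | .nbd_device')   # one device path per line
    count=$(echo "$disks" | grep -c /dev/nbd || true)         # grep -c exits non-zero on 0 matches, hence || true
    echo "exported nbd devices: $count"
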
00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:32:05.203 14:45:24 event.app_repeat -- event/event.sh@39 -- # killprocess 74302 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 74302 ']' 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 74302 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74302 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74302' 00:32:05.203 killing process with pid 74302 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@965 -- # kill 74302 00:32:05.203 14:45:24 event.app_repeat -- common/autotest_common.sh@970 -- # wait 74302 00:32:05.203 spdk_app_start is called in Round 0. 00:32:05.203 Shutdown signal received, stop current app iteration 00:32:05.203 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:32:05.203 spdk_app_start is called in Round 1. 00:32:05.203 Shutdown signal received, stop current app iteration 00:32:05.203 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:32:05.203 spdk_app_start is called in Round 2. 00:32:05.203 Shutdown signal received, stop current app iteration 00:32:05.203 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:32:05.203 spdk_app_start is called in Round 3. 00:32:05.203 Shutdown signal received, stop current app iteration 00:32:05.203 14:45:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:32:05.203 14:45:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:32:05.204 00:32:05.204 real 0m17.570s 00:32:05.204 user 0m38.815s 00:32:05.204 sys 0m2.808s 00:32:05.204 14:45:24 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:05.204 14:45:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:32:05.204 ************************************ 00:32:05.204 END TEST app_repeat 00:32:05.204 ************************************ 00:32:05.204 14:45:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:32:05.204 14:45:24 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:32:05.204 14:45:24 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:05.204 14:45:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:05.204 14:45:24 event -- common/autotest_common.sh@10 -- # set +x 00:32:05.204 ************************************ 00:32:05.204 START TEST cpu_locks 00:32:05.204 ************************************ 00:32:05.204 14:45:24 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:32:05.463 * Looking for test storage... 
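The app_repeat test that finishes above drives one target process through several restart rounds: each round waits for the app to come back, re-creates the Malloc bdevs, verifies data through nbd, then asks the target to kill itself with SIGTERM and sleeps before the next round; the surviving Round 3 instance is killed by the test itself. A schematic of that loop condensed from the event.sh xtrace above (helper internals omitted; this is a summary, not the script verbatim):

    # Schematic of the app_repeat loop, condensed from the event.sh trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_server=/var/tmp/spdk-nbd.sock
    repeat_pid=74302                                         # pid reported in the trace above
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"            # app is up and serving RPCs
        "$rpc" -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc0
        "$rpc" -s "$rpc_server" bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        "$rpc" -s "$rpc_server" spdk_kill_instance SIGTERM   # current iteration stops, next round starts
        sleep 3
    done
    waitforlisten "$repeat_pid" "$rpc_server"                # catches the Round 3 instance
    killprocess "$repeat_pid"
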
00:32:05.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:32:05.463 14:45:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:32:05.463 14:45:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:32:05.463 14:45:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:32:05.463 14:45:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:32:05.463 14:45:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:05.463 14:45:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:05.463 14:45:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:05.463 ************************************ 00:32:05.463 START TEST default_locks 00:32:05.463 ************************************ 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=74918 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 74918 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 74918 ']' 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:05.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:05.463 14:45:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:32:05.463 [2024-07-22 14:45:24.963781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:05.463 [2024-07-22 14:45:24.963856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74918 ] 00:32:05.722 [2024-07-22 14:45:25.105309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.722 [2024-07-22 14:45:25.156579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.289 14:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:06.289 14:45:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:32:06.289 14:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 74918 00:32:06.289 14:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 74918 00:32:06.289 14:45:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 74918 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 74918 ']' 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 74918 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74918 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:06.548 killing process with pid 74918 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74918' 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 74918 00:32:06.548 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 74918 00:32:06.806 14:45:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 74918 00:32:06.806 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:32:06.806 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74918 00:32:06.806 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:32:06.806 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:06.806 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 74918 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 74918 ']' 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:06.807 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:32:06.807 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (74918) - No such process 00:32:06.807 ERROR: process (pid: 74918) is no longer running 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:32:06.807 00:32:06.807 real 0m1.521s 00:32:06.807 user 0m1.565s 00:32:06.807 sys 0m0.452s 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:06.807 14:45:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:32:06.807 ************************************ 00:32:06.807 END TEST default_locks 00:32:06.807 ************************************ 00:32:07.065 14:45:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:32:07.065 14:45:26 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:07.065 14:45:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:07.065 14:45:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:07.065 ************************************ 00:32:07.065 START TEST default_locks_via_rpc 00:32:07.065 ************************************ 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=74976 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 74976 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 74976 ']' 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:07.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
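The default_locks run that just ended reduces to one probe: locks_exist asks lslocks for the file locks held by the target pid and greps for the spdk_cpu_lock prefix, and once the target is killed the follow-up pid lookup fails with the "No such process" seen above. A sketch of that probe, kept close to the commands in the trace:

  # Assumed reduction of the locks_exist helper traced above.
  locks_exist() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist "$spdk_tgt_pid" && echo "core lock held by pid $spdk_tgt_pid"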
00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:07.065 14:45:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:07.065 [2024-07-22 14:45:26.544500] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:07.065 [2024-07-22 14:45:26.544581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74976 ] 00:32:07.065 [2024-07-22 14:45:26.683439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.323 [2024-07-22 14:45:26.735673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 74976 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 74976 00:32:07.890 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:32:08.149 14:45:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 74976 00:32:08.149 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 74976 ']' 00:32:08.149 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 74976 00:32:08.149 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:32:08.149 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:08.149 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 
-- # ps --no-headers -o comm= 74976 00:32:08.408 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:08.408 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:08.408 killing process with pid 74976 00:32:08.408 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74976' 00:32:08.408 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 74976 00:32:08.408 14:45:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 74976 00:32:08.667 00:32:08.667 real 0m1.629s 00:32:08.667 user 0m1.708s 00:32:08.667 sys 0m0.487s 00:32:08.667 14:45:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:08.667 14:45:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:08.667 ************************************ 00:32:08.667 END TEST default_locks_via_rpc 00:32:08.667 ************************************ 00:32:08.667 14:45:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:32:08.667 14:45:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:08.667 14:45:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:08.667 14:45:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:08.667 ************************************ 00:32:08.667 START TEST non_locking_app_on_locked_coremask 00:32:08.667 ************************************ 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=75041 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 75041 /var/tmp/spdk.sock 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75041 ']' 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:08.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:08.667 14:45:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:08.667 [2024-07-22 14:45:28.239478] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
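The default_locks_via_rpc run above toggles the same state at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core locks (the no_locks glob then finds nothing) and framework_enable_cpumask_locks re-claims them. Reproducing that by hand would look roughly like this, assuming the stock scripts/rpc.py wrapper exposes both methods as subcommands:

  # Assumption: rpc.py maps these RPC method names directly to subcommands.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock    # expect 0 while locks are off

  "$RPC" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock    # expect one entry per claimed core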
00:32:08.667 [2024-07-22 14:45:28.239560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75041 ] 00:32:08.927 [2024-07-22 14:45:28.377824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.927 [2024-07-22 14:45:28.428826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=75069 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 75069 /var/tmp/spdk2.sock 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75069 ']' 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:09.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:09.878 14:45:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:09.878 [2024-07-22 14:45:29.185785] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:09.878 [2024-07-22 14:45:29.185872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75069 ] 00:32:09.878 [2024-07-22 14:45:29.316424] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
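non_locking_app_on_locked_coremask pairs a locked and an unlocked target on the same core: the first instance holds the core-0 lock file, and the second, launched above with --disable-cpumask-locks and its own RPC socket, prints "CPU core locks deactivated." and coexists with it. The pairing, stripped down to the two command lines from the trace:

  # Both command lines copied from the trace; SPDK_BIN as in the earlier sketch.
  "$SPDK_BIN" -m 0x1 &                                              # holds /var/tmp/spdk_cpu_lock_000
  spdk_tgt_pid=$!
  "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  spdk_tgt_pid2=$!                                                  # shares core 0 without locking it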
00:32:09.878 [2024-07-22 14:45:29.316491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.878 [2024-07-22 14:45:29.419193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.816 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:10.816 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:32:10.816 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 75041 00:32:10.816 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75041 00:32:10.816 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 75041 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75041 ']' 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75041 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75041 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:11.075 killing process with pid 75041 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75041' 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75041 00:32:11.075 14:45:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75041 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 75069 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75069 ']' 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75069 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75069 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:11.642 killing process with pid 75069 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75069' 00:32:11.642 14:45:31 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75069 00:32:11.642 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75069 00:32:12.211 00:32:12.211 real 0m3.396s 00:32:12.211 user 0m3.722s 00:32:12.211 sys 0m0.919s 00:32:12.211 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:12.211 14:45:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:12.211 ************************************ 00:32:12.211 END TEST non_locking_app_on_locked_coremask 00:32:12.211 ************************************ 00:32:12.211 14:45:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:32:12.211 14:45:31 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:12.211 14:45:31 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:12.211 14:45:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:12.211 ************************************ 00:32:12.211 START TEST locking_app_on_unlocked_coremask 00:32:12.211 ************************************ 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=75137 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 75137 /var/tmp/spdk.sock 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75137 ']' 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:12.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:12.211 14:45:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:12.211 [2024-07-22 14:45:31.687528] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:12.211 [2024-07-22 14:45:31.687612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75137 ] 00:32:12.211 [2024-07-22 14:45:31.826908] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
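Each teardown traced above (killprocess 75041, then 75069, and again for the later pids) follows one pattern: confirm the pid is still alive with kill -0, check via ps that the command name is an SPDK reactor rather than sudo, then kill and wait. A condensed sketch; the sudo branch is simplified here to a plain refusal:

  # Approximation of the killprocess steps visible in the xtrace.
  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                   # already gone, nothing to do
      local name
      name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for a single-core target
      [ "$name" = sudo ] && return 1               # never blindly kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid" 2>/dev/null
  }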
00:32:12.211 [2024-07-22 14:45:31.826971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.477 [2024-07-22 14:45:31.882232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=75165 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 75165 /var/tmp/spdk2.sock 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75165 ']' 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:13.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:13.067 14:45:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:13.067 [2024-07-22 14:45:32.630978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:13.067 [2024-07-22 14:45:32.631060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75165 ] 00:32:13.326 [2024-07-22 14:45:32.758971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.326 [2024-07-22 14:45:32.865761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.892 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:13.892 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:32:13.892 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 75165 00:32:13.892 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75165 00:32:13.892 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 75137 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75137 ']' 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75137 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75137 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:14.460 killing process with pid 75137 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75137' 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75137 00:32:14.460 14:45:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75137 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 75165 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75165 ']' 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 75165 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75165 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:32:15.097 killing process with pid 75165 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75165' 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 75165 00:32:15.097 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 75165 00:32:15.358 00:32:15.358 real 0m3.219s 00:32:15.358 user 0m3.486s 00:32:15.358 sys 0m0.883s 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:15.358 ************************************ 00:32:15.358 END TEST locking_app_on_unlocked_coremask 00:32:15.358 ************************************ 00:32:15.358 14:45:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:32:15.358 14:45:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:15.358 14:45:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:15.358 14:45:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:15.358 ************************************ 00:32:15.358 START TEST locking_app_on_locked_coremask 00:32:15.358 ************************************ 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=75238 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 75238 /var/tmp/spdk.sock 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75238 ']' 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:15.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:15.358 14:45:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:15.358 [2024-07-22 14:45:34.981985] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:15.358 [2024-07-22 14:45:34.982084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75238 ] 00:32:15.616 [2024-07-22 14:45:35.121458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.616 [2024-07-22 14:45:35.173507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=75261 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 75261 /var/tmp/spdk2.sock 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75261 /var/tmp/spdk2.sock 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75261 /var/tmp/spdk2.sock 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 75261 ']' 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:16.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:16.551 14:45:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:16.551 [2024-07-22 14:45:35.882080] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
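The waitforlisten just launched for pid 75261 is expected to fail, since core 0 is already locked by pid 75238, so the call is wrapped in the NOT helper whose bookkeeping (es=0, the es > 128 check, the final !es == 0 test) shows up in the trace. A reduced sketch of that inversion, written as an assumption about the helper rather than a copy of it:

  # Assumed reduction of the NOT/es logic from autotest_common.sh.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"    # a crash or signal is not an "expected" failure
      (( es != 0 ))                     # succeed only when the wrapped command failed
  }

  NOT waitforlisten 75261 /var/tmp/spdk2.sock && echo "second target failed to start, as expected"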
00:32:16.551 [2024-07-22 14:45:35.882158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75261 ] 00:32:16.551 [2024-07-22 14:45:36.011502] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 75238 has claimed it. 00:32:16.551 [2024-07-22 14:45:36.011559] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:32:17.119 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75261) - No such process 00:32:17.119 ERROR: process (pid: 75261) is no longer running 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 75238 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 75238 00:32:17.119 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 75238 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 75238 ']' 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 75238 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75238 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:17.379 killing process with pid 75238 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75238' 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 75238 00:32:17.379 14:45:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 75238 00:32:17.639 00:32:17.639 real 0m2.246s 00:32:17.639 user 0m2.499s 00:32:17.639 sys 0m0.531s 00:32:17.639 14:45:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:17.639 14:45:37 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:32:17.639 ************************************ 00:32:17.639 END TEST locking_app_on_locked_coremask 00:32:17.639 ************************************ 00:32:17.639 14:45:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:32:17.639 14:45:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:17.639 14:45:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:17.639 14:45:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:17.639 ************************************ 00:32:17.639 START TEST locking_overlapped_coremask 00:32:17.639 ************************************ 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=75318 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 75318 /var/tmp/spdk.sock 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75318 ']' 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:17.639 14:45:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:17.898 [2024-07-22 14:45:37.287114] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:17.898 [2024-07-22 14:45:37.287199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75318 ] 00:32:17.898 [2024-07-22 14:45:37.425762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:17.898 [2024-07-22 14:45:37.479502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.898 [2024-07-22 14:45:37.479695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.898 [2024-07-22 14:45:37.479745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=75348 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 75348 /var/tmp/spdk2.sock 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 75348 /var/tmp/spdk2.sock 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 75348 /var/tmp/spdk2.sock 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 75348 ']' 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:18.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:18.835 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:18.835 [2024-07-22 14:45:38.196957] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
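The 0x1c instance just launched overlaps the running 0x7 instance on exactly one core: 0x7 covers cores 0, 1 and 2, while 0x1c covers cores 2, 3 and 4, so the only shared bit is core 2, the core named in the claim_cpu_cores error that follows. The overlap can be checked directly from the shell:

  # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4); intersection is core 2.
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints: overlap mask: 0x4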
00:32:18.835 [2024-07-22 14:45:38.197403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75348 ] 00:32:18.835 [2024-07-22 14:45:38.332418] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75318 has claimed it. 00:32:18.835 [2024-07-22 14:45:38.332478] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:32:19.408 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (75348) - No such process 00:32:19.408 ERROR: process (pid: 75348) is no longer running 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 75318 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 75318 ']' 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 75318 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75318 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:19.408 killing process with pid 75318 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75318' 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 75318 00:32:19.408 14:45:38 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 75318 00:32:19.673 00:32:19.673 real 0m1.989s 00:32:19.673 user 0m5.486s 00:32:19.673 sys 0m0.389s 00:32:19.673 14:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:19.673 14:45:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 ************************************ 00:32:19.673 END TEST locking_overlapped_coremask 00:32:19.673 ************************************ 00:32:19.674 14:45:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:32:19.674 14:45:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:19.674 14:45:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:19.674 14:45:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:19.674 ************************************ 00:32:19.674 START TEST locking_overlapped_coremask_via_rpc 00:32:19.674 ************************************ 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=75394 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 75394 /var/tmp/spdk.sock 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75394 ']' 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:19.674 14:45:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:19.933 [2024-07-22 14:45:39.341170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:19.933 [2024-07-22 14:45:39.341245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75394 ] 00:32:19.933 [2024-07-22 14:45:39.480059] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
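After the overlapped launch is rejected, check_remaining_locks verifies that only the first instance's lock files survive: the /var/tmp/spdk_cpu_lock_* glob is compared against the brace expansion /var/tmp/spdk_cpu_lock_{000..002} that a 0x7 mask should have produced. A sketch that mirrors the comparison in the trace:

  # Mirrors the check_remaining_locks logic visible in the xtrace above.
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
      [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }

  check_remaining_locks || echo "unexpected set of CPU lock files"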
00:32:19.933 [2024-07-22 14:45:39.480111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:19.933 [2024-07-22 14:45:39.533306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.933 [2024-07-22 14:45:39.533404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.933 [2024-07-22 14:45:39.533406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=75424 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 75424 /var/tmp/spdk2.sock 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75424 ']' 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:20.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:20.873 14:45:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:20.873 [2024-07-22 14:45:40.251482] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:20.873 [2024-07-22 14:45:40.251551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75424 ] 00:32:20.873 [2024-07-22 14:45:40.388137] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:32:20.874 [2024-07-22 14:45:40.388184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:20.874 [2024-07-22 14:45:40.498132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.874 [2024-07-22 14:45:40.501770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.874 [2024-07-22 14:45:40.501773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:21.812 [2024-07-22 14:45:41.184822] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 75394 has claimed it. 
00:32:21.812 2024/07/22 14:45:41 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:32:21.812 request: 00:32:21.812 { 00:32:21.812 "method": "framework_enable_cpumask_locks", 00:32:21.812 "params": {} 00:32:21.812 } 00:32:21.812 Got JSON-RPC error response 00:32:21.812 GoRPCClient: error on JSON-RPC call 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 75394 /var/tmp/spdk.sock 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75394 ']' 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:21.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 75424 /var/tmp/spdk2.sock 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 75424 ']' 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:32:21.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
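Here the conflict is produced purely over RPC: both targets started with --disable-cpumask-locks, the first (mask 0x7) claimed cores 0-2 through framework_enable_cpumask_locks, and the same call against the second target's socket came back with the Code=-32603 "Failed to claim CPU core: 2" error shown above. Driven by hand it would look roughly like this, with the same caveat as before about rpc.py exposing the subcommand:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # assumed wrapper, as in the earlier sketch

  "$RPC" -s /var/tmp/spdk.sock  framework_enable_cpumask_locks    # first target claims cores 0-2
  "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "expected failure: core 2 already claimed (JSON-RPC Code=-32603)"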
00:32:21.812 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:21.813 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:32:22.071 00:32:22.071 real 0m2.377s 00:32:22.071 user 0m1.091s 00:32:22.071 sys 0m0.223s 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:22.071 14:45:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:22.071 ************************************ 00:32:22.071 END TEST locking_overlapped_coremask_via_rpc 00:32:22.071 ************************************ 00:32:22.330 14:45:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:32:22.330 14:45:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75394 ]] 00:32:22.330 14:45:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75394 00:32:22.330 14:45:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75394 ']' 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75394 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75394 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:22.331 killing process with pid 75394 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75394' 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75394 00:32:22.331 14:45:41 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75394 00:32:22.589 14:45:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75424 ]] 00:32:22.589 14:45:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75424 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75424 ']' 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75424 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:22.589 
14:45:42 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75424 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:32:22.589 killing process with pid 75424 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75424' 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 75424 00:32:22.589 14:45:42 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 75424 00:32:22.848 14:45:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:32:22.848 14:45:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:32:22.848 14:45:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 75394 ]] 00:32:22.848 14:45:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 75394 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75394 ']' 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75394 00:32:22.848 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75394) - No such process 00:32:22.848 Process with pid 75394 is not found 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75394 is not found' 00:32:22.848 14:45:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 75424 ]] 00:32:22.848 14:45:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 75424 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 75424 ']' 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 75424 00:32:22.848 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (75424) - No such process 00:32:22.848 Process with pid 75424 is not found 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 75424 is not found' 00:32:22.848 14:45:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:32:22.848 00:32:22.848 real 0m17.637s 00:32:22.848 user 0m30.969s 00:32:22.848 sys 0m4.692s 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:22.848 ************************************ 00:32:22.848 END TEST cpu_locks 00:32:22.848 14:45:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:32:22.848 ************************************ 00:32:22.848 00:32:22.848 real 0m43.974s 00:32:22.848 user 1m24.589s 00:32:22.848 sys 0m8.351s 00:32:22.848 14:45:42 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:22.848 14:45:42 event -- common/autotest_common.sh@10 -- # set +x 00:32:22.848 ************************************ 00:32:22.848 END TEST event 00:32:22.848 ************************************ 00:32:23.107 14:45:42 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:32:23.107 14:45:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:23.107 14:45:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:23.107 14:45:42 -- common/autotest_common.sh@10 -- # set +x 00:32:23.107 ************************************ 00:32:23.107 START TEST thread 00:32:23.107 ************************************ 00:32:23.107 14:45:42 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:32:23.107 * Looking for test storage... 
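The cleanup trace above probes pids 75394 and 75424 with kill -0 and reports "No such process" before the cpu_locks and event summaries. A minimal sketch of that existence check, simplified from the killprocess flow shown in this trace (the real autotest_common.sh helper also matches the process name with ps --no-headers -o comm= before deciding how to kill it), might look like:

    if kill -0 "$pid" 2>/dev/null; then
        # process still alive: kill it and wait, as the trace does for live pids
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    else
        echo "Process with pid $pid is not found"
    fi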
00:32:23.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:32:23.107 14:45:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:32:23.107 14:45:42 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:32:23.107 14:45:42 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:23.107 14:45:42 thread -- common/autotest_common.sh@10 -- # set +x 00:32:23.107 ************************************ 00:32:23.107 START TEST thread_poller_perf 00:32:23.107 ************************************ 00:32:23.107 14:45:42 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:32:23.107 [2024-07-22 14:45:42.667082] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:23.107 [2024-07-22 14:45:42.667208] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75567 ] 00:32:23.366 [2024-07-22 14:45:42.794371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.366 [2024-07-22 14:45:42.869180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.366 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:32:24.741 ====================================== 00:32:24.741 busy:2296690096 (cyc) 00:32:24.741 total_run_count: 340000 00:32:24.741 tsc_hz: 2290000000 (cyc) 00:32:24.741 ====================================== 00:32:24.741 poller_cost: 6754 (cyc), 2949 (nsec) 00:32:24.741 00:32:24.741 real 0m1.306s 00:32:24.741 user 0m1.145s 00:32:24.741 sys 0m0.054s 00:32:24.741 14:45:43 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:24.741 14:45:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:32:24.741 ************************************ 00:32:24.741 END TEST thread_poller_perf 00:32:24.741 ************************************ 00:32:24.741 14:45:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:32:24.741 14:45:43 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:32:24.741 14:45:43 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:24.741 14:45:43 thread -- common/autotest_common.sh@10 -- # set +x 00:32:24.741 ************************************ 00:32:24.741 START TEST thread_poller_perf 00:32:24.741 ************************************ 00:32:24.741 14:45:43 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:32:24.741 [2024-07-22 14:45:44.013296] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:24.741 [2024-07-22 14:45:44.013418] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75597 ] 00:32:24.741 [2024-07-22 14:45:44.153850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.741 [2024-07-22 14:45:44.209071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.741 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:32:25.678 ====================================== 00:32:25.678 busy:2292185606 (cyc) 00:32:25.678 total_run_count: 4458000 00:32:25.678 tsc_hz: 2290000000 (cyc) 00:32:25.678 ====================================== 00:32:25.678 poller_cost: 514 (cyc), 224 (nsec) 00:32:25.678 00:32:25.678 real 0m1.294s 00:32:25.678 user 0m1.139s 00:32:25.678 sys 0m0.049s 00:32:25.678 14:45:45 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:25.678 14:45:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:32:25.678 ************************************ 00:32:25.678 END TEST thread_poller_perf 00:32:25.678 ************************************ 00:32:25.935 14:45:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:32:25.936 00:32:25.936 real 0m2.802s 00:32:25.936 user 0m2.359s 00:32:25.936 sys 0m0.243s 00:32:25.936 14:45:45 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:25.936 14:45:45 thread -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 ************************************ 00:32:25.936 END TEST thread 00:32:25.936 ************************************ 00:32:25.936 14:45:45 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:32:25.936 14:45:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:25.936 14:45:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:25.936 14:45:45 -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 ************************************ 00:32:25.936 START TEST accel 00:32:25.936 ************************************ 00:32:25.936 14:45:45 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:32:25.936 * Looking for test storage... 00:32:25.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:32:25.936 14:45:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:32:25.936 14:45:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:32:25.936 14:45:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:32:25.936 14:45:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=75678 00:32:25.936 14:45:45 accel -- accel/accel.sh@63 -- # waitforlisten 75678 00:32:25.936 14:45:45 accel -- common/autotest_common.sh@827 -- # '[' -z 75678 ']' 00:32:25.936 14:45:45 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.936 14:45:45 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:25.936 14:45:45 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:25.936 14:45:45 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:25.936 14:45:45 accel -- common/autotest_common.sh@10 -- # set +x 00:32:25.936 14:45:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:32:25.936 14:45:45 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:32:25.936 14:45:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:25.936 14:45:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:25.936 14:45:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:25.936 14:45:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:25.936 14:45:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:25.936 14:45:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:32:25.936 14:45:45 accel -- accel/accel.sh@41 -- # jq -r . 00:32:25.936 [2024-07-22 14:45:45.503643] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:25.936 [2024-07-22 14:45:45.503755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75678 ] 00:32:26.194 [2024-07-22 14:45:45.643623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.194 [2024-07-22 14:45:45.699558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@860 -- # return 0 00:32:27.128 14:45:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:32:27.128 14:45:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:32:27.128 14:45:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:32:27.128 14:45:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:32:27.128 14:45:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:32:27.128 14:45:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.128 14:45:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@10 -- # set +x 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 
14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # IFS== 00:32:27.128 14:45:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:32:27.128 14:45:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:32:27.128 14:45:46 accel -- accel/accel.sh@75 -- # killprocess 75678 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@946 -- # '[' -z 75678 ']' 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@950 -- # kill -0 75678 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@951 -- # uname 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75678 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75678' 00:32:27.128 killing process with pid 75678 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@965 -- # kill 75678 00:32:27.128 14:45:46 accel -- common/autotest_common.sh@970 -- # wait 75678 00:32:27.388 14:45:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:32:27.388 14:45:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:32:27.388 14:45:46 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:27.388 14:45:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:27.388 14:45:46 accel -- common/autotest_common.sh@10 -- # set +x 00:32:27.388 14:45:46 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:32:27.388 14:45:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:32:27.388 14:45:46 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:27.388 14:45:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:32:27.388 14:45:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:32:27.388 14:45:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:32:27.388 14:45:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:27.388 14:45:46 accel -- common/autotest_common.sh@10 -- # set +x 00:32:27.388 ************************************ 00:32:27.388 START TEST accel_missing_filename 00:32:27.388 ************************************ 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.388 14:45:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:32:27.388 14:45:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:32:27.388 [2024-07-22 14:45:46.924446] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:27.388 [2024-07-22 14:45:46.924539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75742 ] 00:32:27.649 [2024-07-22 14:45:47.067088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.649 [2024-07-22 14:45:47.122321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.649 [2024-07-22 14:45:47.165053] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:27.649 [2024-07-22 14:45:47.226148] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:32:27.909 A filename is required. 
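The "A filename is required." message above is the expected failure for this negative test: the compress workload needs an uncompressed input file passed with -l, and the invocation deliberately omits it. A sketch of the failing call next to a corrected one, reusing the bib file that the compress_verify test below points at, would be:

    # rejected, as above: compress without an input file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # input file supplied via -l; whether the run then completes depends on the remaining flags,
    # as the verify-option failure in the next test shows
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib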
00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:27.909 00:32:27.909 real 0m0.410s 00:32:27.909 user 0m0.252s 00:32:27.909 sys 0m0.100s 00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:27.909 14:45:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:32:27.909 ************************************ 00:32:27.909 END TEST accel_missing_filename 00:32:27.909 ************************************ 00:32:27.909 14:45:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:27.909 14:45:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:32:27.909 14:45:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:27.909 14:45:47 accel -- common/autotest_common.sh@10 -- # set +x 00:32:27.909 ************************************ 00:32:27.909 START TEST accel_compress_verify 00:32:27.909 ************************************ 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:27.909 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:27.909 14:45:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:32:27.909 14:45:47 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:32:27.909 [2024-07-22 14:45:47.363790] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:27.909 [2024-07-22 14:45:47.363876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75767 ] 00:32:27.909 [2024-07-22 14:45:47.505570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.169 [2024-07-22 14:45:47.561228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.169 [2024-07-22 14:45:47.603878] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:28.169 [2024-07-22 14:45:47.664246] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:32:28.169 00:32:28.169 Compression does not support the verify option, aborting. 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:28.169 00:32:28.169 real 0m0.398s 00:32:28.169 user 0m0.242s 00:32:28.169 sys 0m0.095s 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:28.169 14:45:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:32:28.169 ************************************ 00:32:28.169 END TEST accel_compress_verify 00:32:28.169 ************************************ 00:32:28.169 14:45:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:32:28.169 14:45:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:32:28.169 14:45:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:28.169 14:45:47 accel -- common/autotest_common.sh@10 -- # set +x 00:32:28.169 ************************************ 00:32:28.169 START TEST accel_wrong_workload 00:32:28.169 ************************************ 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.169 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:32:28.169 14:45:47 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:32:28.169 14:45:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:32:28.428 Unsupported workload type: foobar 00:32:28.428 [2024-07-22 14:45:47.822082] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:32:28.428 accel_perf options: 00:32:28.428 [-h help message] 00:32:28.428 [-q queue depth per core] 00:32:28.428 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:32:28.428 [-T number of threads per core 00:32:28.428 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:32:28.428 [-t time in seconds] 00:32:28.428 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:32:28.428 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:32:28.428 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:32:28.428 [-l for compress/decompress workloads, name of uncompressed input file 00:32:28.429 [-S for crc32c workload, use this seed value (default 0) 00:32:28.429 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:32:28.429 [-f for fill workload, use this BYTE value (default 255) 00:32:28.429 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:32:28.429 [-y verify result if this switch is on] 00:32:28.429 [-a tasks to allocate per core (default: same value as -q)] 00:32:28.429 Can be used to spread operations across a wider range of memory. 
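The usage text above lists the knobs that the accel tests in this run combine. One representative invocation, matching the crc32c test exercised a few tests later and adding a queue depth and transfer size purely as illustrative values, would be:

    # -t/-w/-S/-y mirror the accel_crc32c test below; -q 64 and -o 4096 are example values only
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 64 -o 4096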
00:32:28.429 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:32:28.429 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:28.429 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:28.429 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:28.429 00:32:28.429 real 0m0.036s 00:32:28.429 user 0m0.015s 00:32:28.429 sys 0m0.020s 00:32:28.429 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:28.429 14:45:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:32:28.429 ************************************ 00:32:28.429 END TEST accel_wrong_workload 00:32:28.429 ************************************ 00:32:28.429 14:45:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:32:28.429 14:45:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:32:28.429 14:45:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:28.429 14:45:47 accel -- common/autotest_common.sh@10 -- # set +x 00:32:28.429 ************************************ 00:32:28.429 START TEST accel_negative_buffers 00:32:28.429 ************************************ 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:32:28.429 14:45:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:32:28.429 -x option must be non-negative. 
00:32:28.429 [2024-07-22 14:45:47.885723] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:32:28.429 accel_perf options: 00:32:28.429 [-h help message] 00:32:28.429 [-q queue depth per core] 00:32:28.429 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:32:28.429 [-T number of threads per core 00:32:28.429 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:32:28.429 [-t time in seconds] 00:32:28.429 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:32:28.429 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:32:28.429 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:32:28.429 [-l for compress/decompress workloads, name of uncompressed input file 00:32:28.429 [-S for crc32c workload, use this seed value (default 0) 00:32:28.429 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:32:28.429 [-f for fill workload, use this BYTE value (default 255) 00:32:28.429 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:32:28.429 [-y verify result if this switch is on] 00:32:28.429 [-a tasks to allocate per core (default: same value as -q)] 00:32:28.429 Can be used to spread operations across a wider range of memory. 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:28.429 00:32:28.429 real 0m0.024s 00:32:28.429 user 0m0.010s 00:32:28.429 sys 0m0.014s 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:28.429 14:45:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:32:28.429 ************************************ 00:32:28.429 END TEST accel_negative_buffers 00:32:28.429 ************************************ 00:32:28.429 14:45:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:32:28.429 14:45:47 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:32:28.429 14:45:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:28.429 14:45:47 accel -- common/autotest_common.sh@10 -- # set +x 00:32:28.429 ************************************ 00:32:28.429 START TEST accel_crc32c 00:32:28.429 ************************************ 00:32:28.429 14:45:47 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:32:28.429 14:45:47 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:32:28.429 14:45:47 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:32:28.429 [2024-07-22 14:45:47.966492] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:28.429 [2024-07-22 14:45:47.966608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75825 ] 00:32:28.697 [2024-07-22 14:45:48.106244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.697 [2024-07-22 14:45:48.163039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.697 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:28.698 14:45:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:32:30.085 14:45:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:30.085 00:32:30.085 real 0m1.404s 00:32:30.085 user 0m0.010s 00:32:30.085 sys 0m0.000s 00:32:30.085 14:45:49 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:30.085 14:45:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:32:30.085 ************************************ 00:32:30.085 END TEST accel_crc32c 00:32:30.085 ************************************ 00:32:30.085 14:45:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:32:30.085 14:45:49 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:32:30.085 14:45:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:30.085 14:45:49 accel -- common/autotest_common.sh@10 -- # set +x 00:32:30.085 ************************************ 00:32:30.085 START TEST accel_crc32c_C2 00:32:30.085 ************************************ 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:32:30.085 [2024-07-22 14:45:49.395055] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:30.085 [2024-07-22 14:45:49.395128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75860 ] 00:32:30.085 [2024-07-22 14:45:49.532591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.085 [2024-07-22 14:45:49.586512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.085 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:30.086 14:45:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:31.458 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:31.459 00:32:31.459 real 0m1.390s 00:32:31.459 user 0m0.012s 00:32:31.459 sys 0m0.000s 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:31.459 14:45:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:32:31.459 ************************************ 00:32:31.459 END TEST accel_crc32c_C2 00:32:31.459 ************************************ 00:32:31.459 14:45:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:32:31.459 14:45:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:32:31.459 14:45:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:31.459 14:45:50 accel -- common/autotest_common.sh@10 -- # set +x 00:32:31.459 ************************************ 00:32:31.459 START TEST accel_copy 00:32:31.459 ************************************ 00:32:31.459 14:45:50 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 
1 -w copy -y 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:32:31.459 14:45:50 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:32:31.459 [2024-07-22 14:45:50.849185] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:31.459 [2024-07-22 14:45:50.849284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75894 ] 00:32:31.459 [2024-07-22 14:45:50.990913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.459 [2024-07-22 14:45:51.046461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:31.718 14:45:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:32.655 14:45:52 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:32:32.655 ************************************ 00:32:32.655 END TEST accel_copy 00:32:32.655 ************************************ 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:32:32.655 14:45:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:32.655 00:32:32.655 real 0m1.405s 00:32:32.655 user 0m0.010s 00:32:32.655 sys 0m0.001s 00:32:32.655 14:45:52 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:32.655 14:45:52 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:32:32.655 14:45:52 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:32.655 14:45:52 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:32:32.655 14:45:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:32.655 14:45:52 accel -- common/autotest_common.sh@10 -- # set +x 00:32:32.655 ************************************ 00:32:32.655 START TEST accel_fill 00:32:32.655 ************************************ 00:32:32.655 14:45:52 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:32:32.655 14:45:52 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
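For anyone replaying the accel_copy and accel_fill runs above outside this Jenkins job, the xtrace already shows the exact binary and flags; the sketch below only restates them as standalone shell commands. It assumes the same /home/vagrant/spdk_repo checkout and drops the -c /dev/fd/62 argument, which appears to be the JSON accel config the accel.sh harness pipes in via a file descriptor; without it accel_perf should fall back to its default (software) module, though that fallback is an assumption rather than something this log demonstrates.

    # accel_copy: copy workload, 1 second run (-t 1), flags copied verbatim from the trace
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y
    # accel_fill: fill workload with the extra -f/-q/-a parameters used by this job
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y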
00:32:32.914 [2024-07-22 14:45:52.305884] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:32.914 [2024-07-22 14:45:52.305976] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75923 ] 00:32:32.914 [2024-07-22 14:45:52.443347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.914 [2024-07-22 14:45:52.499380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:32:33.173 14:45:52 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:33.173 14:45:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:32:34.108 14:45:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:34.108 00:32:34.108 real 0m1.404s 00:32:34.108 user 0m1.208s 00:32:34.108 sys 0m0.097s 00:32:34.108 14:45:53 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:34.108 14:45:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:32:34.108 ************************************ 00:32:34.108 END TEST accel_fill 00:32:34.108 ************************************ 00:32:34.108 14:45:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:32:34.108 14:45:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:32:34.108 14:45:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:34.108 14:45:53 accel -- common/autotest_common.sh@10 -- # set +x 00:32:34.108 ************************************ 00:32:34.108 START TEST accel_copy_crc32c 00:32:34.108 ************************************ 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:32:34.108 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:32:34.367 [2024-07-22 14:45:53.757328] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
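The accel_copy_crc32c case starting above follows the same template; a minimal standalone sketch of the invocation shown in its trace, with the same caveat about the omitted -c /dev/fd/62 config:

    # copy_crc32c: copy the 4096-byte buffer and compute its CRC-32C in one operation
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y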
00:32:34.367 [2024-07-22 14:45:53.757422] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75958 ] 00:32:34.367 [2024-07-22 14:45:53.899596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.367 [2024-07-22 14:45:53.955624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:32:34.626 14:45:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:54 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.626 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:34.627 14:45:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:35.580 00:32:35.580 real 0m1.413s 00:32:35.580 user 0m1.221s 00:32:35.580 sys 0m0.090s 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:35.580 14:45:55 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:32:35.580 ************************************ 00:32:35.580 END TEST accel_copy_crc32c 00:32:35.580 ************************************ 00:32:35.580 14:45:55 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:32:35.580 14:45:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:32:35.580 14:45:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:35.580 14:45:55 accel -- common/autotest_common.sh@10 -- # set +x 00:32:35.580 ************************************ 00:32:35.580 START TEST accel_copy_crc32c_C2 00:32:35.580 ************************************ 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:32:35.580 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:32:35.839 [2024-07-22 14:45:55.227467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:35.839 [2024-07-22 14:45:55.227696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75992 ] 00:32:35.839 [2024-07-22 14:45:55.366414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.839 [2024-07-22 14:45:55.421786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:35.839 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
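The accel_copy_crc32c_C2 variant configured above differs only in the extra -C 2 argument; judging from the buffer sizes echoed in the dump (4096-byte chunks against an 8192-byte destination), it chains the copy_crc32c operation across two buffers, but that reading is inferred from the log rather than stated by it.

    # copy_crc32c chained over 2 buffers (-C 2), otherwise identical to the previous run
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2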
00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:36.099 14:45:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:37.037 00:32:37.037 real 0m1.409s 00:32:37.037 user 0m1.228s 00:32:37.037 sys 0m0.091s 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:37.037 14:45:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:32:37.037 ************************************ 00:32:37.037 END TEST accel_copy_crc32c_C2 00:32:37.037 ************************************ 00:32:37.037 14:45:56 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:32:37.037 14:45:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:32:37.037 14:45:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:37.037 14:45:56 accel -- common/autotest_common.sh@10 -- # set +x 00:32:37.037 ************************************ 00:32:37.037 START TEST accel_dualcast 00:32:37.037 ************************************ 00:32:37.037 14:45:56 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:32:37.037 14:45:56 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:32:37.037 14:45:56 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:32:37.037 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.037 14:45:56 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:32:37.038 14:45:56 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:32:37.297 [2024-07-22 14:45:56.690825] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:37.297 [2024-07-22 14:45:56.691009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76027 ] 00:32:37.297 [2024-07-22 14:45:56.831982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.297 [2024-07-22 14:45:56.887058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:37.557 14:45:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:38.504 
14:45:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:32:38.504 14:45:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:38.504 00:32:38.504 real 0m1.412s 00:32:38.504 user 0m1.229s 00:32:38.504 sys 0m0.095s 00:32:38.505 14:45:58 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:38.505 14:45:58 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:32:38.505 ************************************ 00:32:38.505 END TEST accel_dualcast 00:32:38.505 ************************************ 00:32:38.505 14:45:58 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:32:38.505 14:45:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:32:38.505 14:45:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:38.505 14:45:58 accel -- common/autotest_common.sh@10 -- # set +x 00:32:38.505 ************************************ 00:32:38.505 START TEST accel_compare 00:32:38.505 ************************************ 00:32:38.505 14:45:58 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:32:38.505 14:45:58 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:32:38.774 [2024-07-22 14:45:58.155250] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
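The accel_dualcast and accel_compare cases follow the identical pattern, so one sketch covers both (same assumptions as above: local spdk_repo path, -c /dev/fd/62 JSON config omitted):

    # dualcast: write one 4096-byte source to two destinations
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
    # compare: compare two 4096-byte buffers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y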
00:32:38.774 [2024-07-22 14:45:58.155353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76060 ] 00:32:38.774 [2024-07-22 14:45:58.294738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.774 [2024-07-22 14:45:58.349942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:38.774 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:38.775 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:38.775 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:38.775 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:38.775 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.050 14:45:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.989 14:45:59 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:32:39.989 14:45:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:39.989 00:32:39.990 real 0m1.409s 00:32:39.990 user 0m1.234s 00:32:39.990 sys 0m0.089s 00:32:39.990 14:45:59 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:39.990 14:45:59 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:32:39.990 ************************************ 00:32:39.990 END TEST accel_compare 00:32:39.990 ************************************ 00:32:39.990 14:45:59 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:32:39.990 14:45:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:32:39.990 14:45:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:39.990 14:45:59 accel -- common/autotest_common.sh@10 -- # set +x 00:32:39.990 ************************************ 00:32:39.990 START TEST accel_xor 00:32:39.990 ************************************ 00:32:39.990 14:45:59 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:32:39.990 14:45:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:32:39.990 [2024-07-22 14:45:59.618728] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
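For context on the traces in this part of the log: each accel_* test here is a thin wrapper that launches the accel_perf example binary with the workload named in the test, as the command line captured above for the xor case shows. A minimal sketch of reproducing that case by hand, assuming the same SPDK build tree and assuming accel_perf also runs without the -c /dev/fd/62 JSON config that the accel.sh wrapper pipes in, would be:

  # software-path xor for 1 second with result verification (-y),
  # matching the '-t 1 -w xor -y' arguments recorded above
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y

The '4096 bytes', '32' and '1 seconds' values read back in the trace are the settings reported for the run; the recorded command line itself passes only -t 1 -w xor -y.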
00:32:39.990 [2024-07-22 14:45:59.618817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76096 ] 00:32:40.250 [2024-07-22 14:45:59.759373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.250 [2024-07-22 14:45:59.814456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:32:40.250 14:45:59 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:40.250 14:45:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:32:41.629 14:46:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:41.629 00:32:41.629 real 0m1.409s 00:32:41.629 user 0m1.219s 00:32:41.629 sys 0m0.102s 00:32:41.629 14:46:00 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:41.629 14:46:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:32:41.629 ************************************ 00:32:41.629 END TEST accel_xor 00:32:41.629 ************************************ 00:32:41.629 14:46:01 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:32:41.629 14:46:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:32:41.629 14:46:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:41.629 14:46:01 accel -- common/autotest_common.sh@10 -- # set +x 00:32:41.629 ************************************ 00:32:41.629 START TEST accel_xor 00:32:41.629 ************************************ 00:32:41.629 14:46:01 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:32:41.629 14:46:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:32:41.629 [2024-07-22 14:46:01.066977] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:41.629 [2024-07-22 14:46:01.067063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76125 ] 00:32:41.629 [2024-07-22 14:46:01.206908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.888 [2024-07-22 14:46:01.262449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.888 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.888 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.888 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.888 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.888 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.888 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:32:41.889 14:46:01 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:41.889 14:46:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:32:42.824 14:46:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:42.824 00:32:42.824 real 0m1.408s 00:32:42.824 user 0m1.214s 00:32:42.824 sys 0m0.109s 00:32:42.824 14:46:02 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:42.824 14:46:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:32:42.824 ************************************ 00:32:42.824 END TEST accel_xor 00:32:42.824 ************************************ 00:32:43.083 14:46:02 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:32:43.084 14:46:02 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:32:43.084 14:46:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:43.084 14:46:02 accel -- common/autotest_common.sh@10 -- # set +x 00:32:43.084 ************************************ 00:32:43.084 START TEST accel_dif_verify 00:32:43.084 ************************************ 00:32:43.084 14:46:02 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:32:43.084 14:46:02 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:32:43.084 [2024-07-22 14:46:02.515983] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:43.084 [2024-07-22 14:46:02.516077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76159 ] 00:32:43.084 [2024-07-22 14:46:02.661979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.343 [2024-07-22 14:46:02.724251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:43.343 14:46:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:43.344 14:46:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:43.344 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:43.344 14:46:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:32:44.279 14:46:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:44.538 14:46:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:32:44.538 14:46:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:44.538 00:32:44.538 real 0m1.415s 00:32:44.538 user 0m1.216s 00:32:44.538 sys 0m0.105s 00:32:44.538 14:46:03 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:44.538 14:46:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:32:44.538 ************************************ 00:32:44.538 END TEST accel_dif_verify 00:32:44.538 ************************************ 00:32:44.538 14:46:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:32:44.538 14:46:03 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:32:44.538 14:46:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:44.538 14:46:03 accel -- common/autotest_common.sh@10 -- # set +x 00:32:44.538 ************************************ 00:32:44.538 START TEST accel_dif_generate 00:32:44.538 ************************************ 00:32:44.538 14:46:03 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:32:44.538 14:46:03 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:32:44.538 14:46:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:32:44.538 [2024-07-22 14:46:03.987970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:44.538 [2024-07-22 14:46:03.988073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76194 ] 00:32:44.538 [2024-07-22 14:46:04.130166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.798 [2024-07-22 14:46:04.185797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:44.798 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:44.799 14:46:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:46.177 14:46:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:46.177 14:46:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:46.177 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:46.177 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:32:46.178 14:46:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:46.178 00:32:46.178 real 0m1.413s 00:32:46.178 user 0m1.221s 00:32:46.178 sys 0m0.104s 00:32:46.178 14:46:05 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:46.178 
14:46:05 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:32:46.178 ************************************ 00:32:46.178 END TEST accel_dif_generate 00:32:46.178 ************************************ 00:32:46.178 14:46:05 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:32:46.178 14:46:05 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:32:46.178 14:46:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:46.178 14:46:05 accel -- common/autotest_common.sh@10 -- # set +x 00:32:46.178 ************************************ 00:32:46.178 START TEST accel_dif_generate_copy 00:32:46.178 ************************************ 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:32:46.178 [2024-07-22 14:46:05.457360] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
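The dif_generate case that just finished and the dif_generate_copy case starting above bracket the verify case: generate produces DIF metadata for the data buffers, and generate_copy is understood to do the same while also copying the data, which matches both traces reading the same '4096 bytes'/'512 bytes'/'8 bytes' buffer set. Sketches of the recorded invocations, with the usual caveat about the omitted /dev/fd/62 config:

  # generate DIF metadata for 1 second, then the generate-plus-copy variant
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy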
00:32:46.178 [2024-07-22 14:46:05.457455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76223 ] 00:32:46.178 [2024-07-22 14:46:05.591417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.178 [2024-07-22 14:46:05.653533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:46.178 14:46:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:47.559 00:32:47.559 real 0m1.414s 00:32:47.559 user 0m1.229s 00:32:47.559 sys 0m0.096s 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:47.559 14:46:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:32:47.559 ************************************ 00:32:47.559 END TEST accel_dif_generate_copy 00:32:47.559 ************************************ 00:32:47.559 14:46:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:32:47.559 14:46:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:47.559 14:46:06 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:32:47.559 14:46:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:47.559 14:46:06 accel -- common/autotest_common.sh@10 -- # set +x 00:32:47.559 ************************************ 00:32:47.559 START TEST accel_comp 00:32:47.559 ************************************ 00:32:47.559 14:46:06 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:47.559 14:46:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:32:47.559 14:46:06 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:32:47.559 14:46:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
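(Editor's note) The three `[[ ... ]]` tests at accel.sh@27 above are the pass criteria for accel_dif_generate_copy, shown by xtrace after expansion: an engine module and an opcode must have been recorded, and the module must be the software one. Against the variables the trace assigns earlier (`accel_module=`, `accel_opc=`), they correspond to:

    # Pass criteria as they appear (expanded) at accel.sh@27 in the trace above
    [[ -n "$accel_module" ]]            # expanded to: [[ -n software ]]
    [[ -n "$accel_opc" ]]               # expanded to: [[ -n dif_generate_copy ]]
    [[ "$accel_module" == software ]]   # expanded to: [[ software == \s\o\f\t\w\a\r\e ]]

The backslash-escaped right-hand side in the last check is just how bash xtrace renders the `==` pattern so that it reads as a literal string rather than a glob.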
00:32:47.559 14:46:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:47.559 14:46:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.559 14:46:06 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:32:47.560 14:46:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:32:47.560 [2024-07-22 14:46:06.927941] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:47.560 [2024-07-22 14:46:06.928134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76263 ] 00:32:47.560 [2024-07-22 14:46:07.068104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.560 [2024-07-22 14:46:07.123707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.560 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:47.560 14:46:07 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:47.820 14:46:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:47.820 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:47.820 14:46:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:32:48.759 14:46:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:48.759 00:32:48.759 real 0m1.414s 00:32:48.759 user 0m1.232s 00:32:48.759 sys 0m0.093s 00:32:48.759 14:46:08 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:48.759 14:46:08 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:32:48.759 ************************************ 00:32:48.759 END TEST accel_comp 00:32:48.759 ************************************ 00:32:48.759 14:46:08 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:48.759 14:46:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:32:48.759 14:46:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:48.759 14:46:08 accel -- common/autotest_common.sh@10 -- # set +x 00:32:48.759 ************************************ 00:32:48.759 START TEST accel_decomp 00:32:48.759 ************************************ 00:32:48.759 14:46:08 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:32:48.759 
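(Editor's note) The accel_comp record that ends above also makes the full invocation chain visible in the trace: `run_test accel_comp accel_test -t 1 -w compress -l .../test/accel/bib` (accel.sh@116) launches accel_test, whose `accel_perf` step (accel.sh@15) expands to the binary call at accel.sh@12, `/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l .../test/accel/bib`. A rough standalone re-run of that compress case is sketched below; the one liberty taken is dropping `-c /dev/fd/62`, which the harness evidently uses to feed the JSON accel config built by build_accel_config over an inherited descriptor, and which is empty here per `accel_json_cfg=()`:

    # Sketch only: paths are copied verbatim from this log; -c /dev/fd/62 is omitted
    # on the assumption that no module configuration is needed (accel_json_cfg=() above).
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/examples/accel_perf" \
        -t 1 \
        -w compress \
        -l "$SPDK_REPO/test/accel/bib"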
14:46:08 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:32:48.759 14:46:08 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:32:48.759 [2024-07-22 14:46:08.386890] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:48.759 [2024-07-22 14:46:08.387000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76292 ] 00:32:49.019 [2024-07-22 14:46:08.529787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.019 [2024-07-22 14:46:08.585850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.019 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.020 14:46:08 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.020 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:49.280 14:46:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:49.280 14:46:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:49.280 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:49.280 14:46:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:32:50.219 14:46:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:50.219 00:32:50.219 real 0m1.403s 00:32:50.219 user 0m0.021s 00:32:50.219 sys 0m0.002s 00:32:50.219 14:46:09 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:50.219 ************************************ 00:32:50.219 END TEST accel_decomp 00:32:50.219 ************************************ 00:32:50.219 14:46:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:32:50.219 14:46:09 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:50.219 14:46:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:32:50.219 14:46:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:50.219 14:46:09 accel -- common/autotest_common.sh@10 -- # set +x 
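(Editor's note) The accel_decomp case that finishes above mirrors the compress case with the opcode flipped and one extra `-y` flag; the trace records the flag but does not gloss it (the parsed summary does flip from 'No' in the compress run to 'Yes' here, which is consistent with `-y` toggling a verify-style option, though that reading is an inference). For reference, the wrapper call and the binary command it expands to, both copied verbatim from the trace (accel.sh@117 and accel.sh@12):

    # As recorded in the trace above; meaningful only inside the autotest harness,
    # where run_test and accel_test are defined.
    run_test accel_decomp accel_test -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y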
00:32:50.219 ************************************ 00:32:50.219 START TEST accel_decmop_full 00:32:50.219 ************************************ 00:32:50.219 14:46:09 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:32:50.219 14:46:09 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:32:50.219 [2024-07-22 14:46:09.839066] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:50.219 [2024-07-22 14:46:09.839158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76331 ] 00:32:50.480 [2024-07-22 14:46:09.984007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.480 [2024-07-22 14:46:10.039458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:50.480 14:46:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:51.861 14:46:11 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:32:51.861 14:46:11 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:51.861 00:32:51.861 real 0m1.416s 00:32:51.861 user 0m1.229s 00:32:51.861 sys 0m0.099s 00:32:51.861 14:46:11 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:51.861 14:46:11 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:32:51.862 ************************************ 00:32:51.862 END TEST accel_decmop_full 00:32:51.862 ************************************ 00:32:51.862 14:46:11 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:51.862 14:46:11 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:32:51.862 14:46:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:51.862 14:46:11 accel -- common/autotest_common.sh@10 -- # set +x 00:32:51.862 ************************************ 00:32:51.862 START TEST accel_decomp_mcore 00:32:51.862 ************************************ 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
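(Editor's note) The "full" variant that completes above keeps the harness's own spelling of its name (accel_decmop_full, accel.sh@118) and differs from plain accel_decomp only in the trailing `-o 0`; correspondingly, the parsed summary reports '111250 bytes' instead of the '4096 bytes' seen in the earlier runs. The trace does not explain `-o`, so it is left uninterpreted here and simply passed through as recorded:

    # Binary invocation copied verbatim from accel.sh@12 in the trace above.
    # -o 0 is reproduced as-is; its meaning is not documented in this log.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0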
00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:32:51.862 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:32:51.862 [2024-07-22 14:46:11.320299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:51.862 [2024-07-22 14:46:11.320487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76361 ] 00:32:51.862 [2024-07-22 14:46:11.454887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:52.122 [2024-07-22 14:46:11.515134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.122 [2024-07-22 14:46:11.515332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.122 [2024-07-22 14:46:11.515524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.122 [2024-07-22 14:46:11.515525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:52.122 14:46:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 ************************************ 00:32:53.512 END TEST accel_decomp_mcore 00:32:53.512 ************************************ 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:53.512 00:32:53.512 real 0m1.438s 00:32:53.512 user 0m4.574s 00:32:53.512 sys 0m0.116s 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:53.512 14:46:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:32:53.512 14:46:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:53.512 14:46:12 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:32:53.512 14:46:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:53.512 14:46:12 accel -- common/autotest_common.sh@10 -- # set +x 00:32:53.512 ************************************ 00:32:53.512 START TEST accel_decomp_full_mcore 00:32:53.512 ************************************ 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:32:53.512 14:46:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
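(Editor's note) The accel_decomp_mcore result above is the first multi-core run in this log. Its extra `-m 0xf` argument shows up downstream as `-c 0xf` in the DPDK EAL parameters, as "Total cores available: 4", and as four reactors starting on cores 0 through 3, all consistent with a four-core mask (0xf = binary 1111). The wall-clock versus CPU split in the summary (real 0m1.438s against user 0m4.574s) likewise fits a one-second workload running on four cores concurrently. The recorded invocation:

    # Copied verbatim from accel.sh@12 in the trace above.
    # -m 0xf appears to select cores 0-3 (0xf == 0b1111), matching the EAL -c 0xf line.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf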
00:32:53.512 [2024-07-22 14:46:12.810109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:53.512 [2024-07-22 14:46:12.810204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76393 ] 00:32:53.512 [2024-07-22 14:46:12.934468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.512 [2024-07-22 14:46:13.013595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.512 [2024-07-22 14:46:13.013721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.512 [2024-07-22 14:46:13.013835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.512 [2024-07-22 14:46:13.013840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:53.512 14:46:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:32:54.904 14:46:14 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:32:54.904 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:54.905 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:32:54.905 14:46:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:54.905 00:32:54.905 real 0m1.457s 00:32:54.905 user 0m4.630s 00:32:54.905 sys 0m0.126s 00:32:54.905 14:46:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:54.905 14:46:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:32:54.905 ************************************ 00:32:54.905 END TEST accel_decomp_full_mcore 00:32:54.905 ************************************ 00:32:54.905 14:46:14 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:54.905 14:46:14 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:32:54.905 14:46:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:54.905 14:46:14 accel -- common/autotest_common.sh@10 -- # set +x 00:32:54.905 ************************************ 00:32:54.905 START TEST accel_decomp_mthread 00:32:54.905 ************************************ 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:32:54.905 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:32:54.905 [2024-07-22 14:46:14.324605] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:54.905 [2024-07-22 14:46:14.324850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76436 ] 00:32:54.905 [2024-07-22 14:46:14.467551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.905 [2024-07-22 14:46:14.523139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:55.165 14:46:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:56.104 00:32:56.104 real 0m1.423s 00:32:56.104 user 0m1.243s 00:32:56.104 sys 0m0.093s 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:56.104 14:46:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:32:56.104 ************************************ 00:32:56.104 END TEST accel_decomp_mthread 00:32:56.104 ************************************ 00:32:56.363 14:46:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:32:56.363 14:46:15 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:32:56.363 14:46:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:56.363 14:46:15 accel -- common/autotest_common.sh@10 -- # set +x 00:32:56.363 ************************************ 00:32:56.363 START TEST accel_decomp_full_mthread 00:32:56.363 ************************************ 00:32:56.363 14:46:15 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:32:56.363 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:32:56.363 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:32:56.363 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:32:56.364 14:46:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:32:56.364 [2024-07-22 14:46:15.792835] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:56.364 [2024-07-22 14:46:15.793037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76465 ] 00:32:56.364 [2024-07-22 14:46:15.937080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.364 [2024-07-22 14:46:15.993285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.622 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.622 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.622 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.622 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:56.623 14:46:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:32:58.002 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:32:58.002 ************************************ 00:32:58.002 END TEST accel_decomp_full_mthread 00:32:58.002 ************************************ 00:32:58.003 14:46:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:58.003 00:32:58.003 real 0m1.435s 00:32:58.003 user 0m1.236s 00:32:58.003 sys 0m0.110s 00:32:58.003 14:46:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:58.003 14:46:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:32:58.003 14:46:17 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:32:58.003 14:46:17 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:32:58.003 14:46:17 accel -- accel/accel.sh@137 -- # build_accel_config 00:32:58.003 14:46:17 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:32:58.003 14:46:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:32:58.003 14:46:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:58.003 14:46:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:32:58.003 14:46:17 accel -- common/autotest_common.sh@10 -- # set +x 00:32:58.003 14:46:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:32:58.003 14:46:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:32:58.003 14:46:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:32:58.003 14:46:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:32:58.003 14:46:17 accel -- accel/accel.sh@41 -- # jq -r . 00:32:58.003 ************************************ 00:32:58.003 START TEST accel_dif_functional_tests 00:32:58.003 ************************************ 00:32:58.003 14:46:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:32:58.003 [2024-07-22 14:46:17.302198] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:58.003 [2024-07-22 14:46:17.302289] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76506 ] 00:32:58.003 [2024-07-22 14:46:17.446262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:58.003 [2024-07-22 14:46:17.501260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.003 [2024-07-22 14:46:17.501391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.003 [2024-07-22 14:46:17.501392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.003 00:32:58.003 00:32:58.003 CUnit - A unit testing framework for C - Version 2.1-3 00:32:58.003 http://cunit.sourceforge.net/ 00:32:58.003 00:32:58.003 00:32:58.003 Suite: accel_dif 00:32:58.003 Test: verify: DIF generated, GUARD check ...passed 00:32:58.003 Test: verify: DIF generated, APPTAG check ...passed 00:32:58.003 Test: verify: DIF generated, REFTAG check ...passed 00:32:58.003 Test: verify: DIF not generated, GUARD check ...passed 00:32:58.003 Test: verify: DIF not generated, APPTAG check ...passed 00:32:58.003 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 14:46:17.569376] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:32:58.003 [2024-07-22 14:46:17.569456] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:32:58.003 [2024-07-22 14:46:17.569536] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:32:58.003 passed 00:32:58.003 Test: verify: APPTAG correct, APPTAG check ...passed 00:32:58.003 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 14:46:17.569597] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:32:58.003 passed 00:32:58.003 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:32:58.003 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:32:58.003 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:32:58.003 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 14:46:17.569810] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:32:58.003 passed 00:32:58.003 Test: verify copy: DIF generated, GUARD check ...passed 00:32:58.003 Test: verify copy: DIF generated, APPTAG check ...passed 00:32:58.003 Test: verify copy: DIF generated, REFTAG check ...passed 00:32:58.003 Test: verify copy: DIF not generated, GUARD check ...[2024-07-22 14:46:17.570015] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:32:58.003 passed 00:32:58.003 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-22 14:46:17.570060] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:32:58.003 passed 00:32:58.003 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-22 14:46:17.570090] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:32:58.003 passed 00:32:58.003 Test: generate copy: DIF generated, GUARD check ...passed 00:32:58.003 Test: generate copy: DIF generated, APTTAG check ...passed 00:32:58.003 Test: generate copy: DIF generated, REFTAG check ...passed 00:32:58.003 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:32:58.003 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:32:58.003 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:32:58.003 Test: generate copy: iovecs-len validate ...[2024-07-22 14:46:17.570386] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:32:58.003 passed 00:32:58.003 Test: generate copy: buffer alignment validate ...passed 00:32:58.003 00:32:58.003 Run Summary: Type Total Ran Passed Failed Inactive 00:32:58.003 suites 1 1 n/a 0 0 00:32:58.003 tests 26 26 26 0 0 00:32:58.003 asserts 115 115 115 0 n/a 00:32:58.003 00:32:58.003 Elapsed time = 0.002 seconds 00:32:58.262 00:32:58.262 real 0m0.492s 00:32:58.262 user 0m0.607s 00:32:58.262 sys 0m0.140s 00:32:58.262 ************************************ 00:32:58.262 END TEST accel_dif_functional_tests 00:32:58.262 ************************************ 00:32:58.262 14:46:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:58.262 14:46:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:32:58.262 00:32:58.262 real 0m32.435s 00:32:58.262 user 0m34.365s 00:32:58.262 sys 0m3.674s 00:32:58.262 ************************************ 00:32:58.262 END TEST accel 00:32:58.262 ************************************ 00:32:58.262 14:46:17 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:58.262 14:46:17 accel -- common/autotest_common.sh@10 -- # set +x 00:32:58.262 14:46:17 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:32:58.262 14:46:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:58.262 14:46:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:58.262 14:46:17 -- common/autotest_common.sh@10 -- # set +x 00:32:58.262 ************************************ 00:32:58.262 START TEST accel_rpc 00:32:58.262 ************************************ 00:32:58.262 14:46:17 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:32:58.522 * Looking for test 
storage... 00:32:58.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:32:58.522 14:46:17 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:32:58.522 14:46:17 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=76565 00:32:58.522 14:46:17 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 76565 00:32:58.522 14:46:17 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 76565 ']' 00:32:58.522 14:46:17 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.522 14:46:17 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:58.522 14:46:17 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.522 14:46:17 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:32:58.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.522 14:46:17 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:58.522 14:46:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:58.522 [2024-07-22 14:46:18.057010] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:58.522 [2024-07-22 14:46:18.057664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76565 ] 00:32:58.782 [2024-07-22 14:46:18.196954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.782 [2024-07-22 14:46:18.250104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.351 14:46:18 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:59.351 14:46:18 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:32:59.351 14:46:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:32:59.351 14:46:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:32:59.351 14:46:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:32:59.351 14:46:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:32:59.351 14:46:18 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:32:59.351 14:46:18 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:59.351 14:46:18 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:59.351 14:46:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:59.351 ************************************ 00:32:59.351 START TEST accel_assign_opcode 00:32:59.351 ************************************ 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:32:59.351 [2024-07-22 14:46:18.925312] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:32:59.351 [2024-07-22 14:46:18.937292] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.351 14:46:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.610 software 00:32:59.610 ************************************ 00:32:59.610 END TEST accel_assign_opcode 00:32:59.610 ************************************ 00:32:59.610 00:32:59.610 real 0m0.248s 00:32:59.610 user 0m0.052s 00:32:59.610 sys 0m0.015s 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:59.610 14:46:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:32:59.610 14:46:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 76565 00:32:59.610 14:46:19 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 76565 ']' 00:32:59.610 14:46:19 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 76565 00:32:59.610 14:46:19 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:32:59.610 14:46:19 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:59.610 14:46:19 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76565 00:32:59.868 14:46:19 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:59.868 killing process with pid 76565 00:32:59.868 14:46:19 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:59.868 14:46:19 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76565' 00:32:59.868 14:46:19 accel_rpc -- common/autotest_common.sh@965 -- # kill 76565 00:32:59.868 14:46:19 accel_rpc -- common/autotest_common.sh@970 -- # wait 76565 00:33:00.126 ************************************ 00:33:00.126 END TEST accel_rpc 00:33:00.126 ************************************ 00:33:00.126 00:33:00.126 real 0m1.708s 00:33:00.126 user 0m1.731s 00:33:00.126 sys 0m0.442s 00:33:00.126 14:46:19 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:00.126 14:46:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:00.126 14:46:19 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:33:00.126 14:46:19 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:00.126 14:46:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:00.126 14:46:19 -- common/autotest_common.sh@10 -- # set +x 00:33:00.126 ************************************ 00:33:00.126 START TEST app_cmdline 00:33:00.126 ************************************ 00:33:00.126 14:46:19 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:33:00.126 * Looking for test storage... 00:33:00.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:33:00.126 14:46:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:33:00.126 14:46:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=76676 00:33:00.126 14:46:19 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:33:00.126 14:46:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 76676 00:33:00.126 14:46:19 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 76676 ']' 00:33:00.126 14:46:19 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.126 14:46:19 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:00.126 14:46:19 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.126 14:46:19 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:00.126 14:46:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:33:00.384 [2024-07-22 14:46:19.808798] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:00.384 [2024-07-22 14:46:19.808869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76676 ] 00:33:00.384 [2024-07-22 14:46:19.946993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.384 [2024-07-22 14:46:20.002442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:33:01.321 { 00:33:01.321 "fields": { 00:33:01.321 "commit": "5fa2f5086", 00:33:01.321 "major": 24, 00:33:01.321 "minor": 5, 00:33:01.321 "patch": 1, 00:33:01.321 "suffix": "-pre" 00:33:01.321 }, 00:33:01.321 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086" 00:33:01.321 } 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:33:01.321 14:46:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.321 14:46:20 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:01.322 14:46:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:01.322 14:46:20 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:01.322 14:46:20 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:01.322 14:46:20 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:33:01.582 2024/07/22 14:46:21 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:33:01.582 request: 00:33:01.582 { 00:33:01.582 "method": "env_dpdk_get_mem_stats", 00:33:01.582 "params": {} 00:33:01.582 } 00:33:01.582 Got JSON-RPC error response 00:33:01.582 GoRPCClient: error on JSON-RPC call 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:01.582 14:46:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 76676 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 76676 ']' 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 76676 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76676 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:01.582 killing process with pid 76676 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76676' 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@965 -- # kill 76676 00:33:01.582 14:46:21 app_cmdline -- common/autotest_common.sh@970 -- # wait 76676 00:33:02.148 00:33:02.148 real 0m1.842s 00:33:02.148 user 0m2.172s 00:33:02.148 sys 0m0.455s 00:33:02.148 14:46:21 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:02.148 14:46:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:33:02.148 ************************************ 00:33:02.148 END TEST app_cmdline 00:33:02.148 ************************************ 00:33:02.148 14:46:21 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:33:02.148 14:46:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:02.148 14:46:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:02.148 14:46:21 -- common/autotest_common.sh@10 -- # set +x 00:33:02.148 ************************************ 00:33:02.148 START TEST version 00:33:02.148 ************************************ 00:33:02.148 14:46:21 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:33:02.148 * Looking for test storage... 
00:33:02.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:33:02.148 14:46:21 version -- app/version.sh@17 -- # get_header_version major 00:33:02.148 14:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # cut -f2 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:33:02.149 14:46:21 version -- app/version.sh@17 -- # major=24 00:33:02.149 14:46:21 version -- app/version.sh@18 -- # get_header_version minor 00:33:02.149 14:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # cut -f2 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:33:02.149 14:46:21 version -- app/version.sh@18 -- # minor=5 00:33:02.149 14:46:21 version -- app/version.sh@19 -- # get_header_version patch 00:33:02.149 14:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # cut -f2 00:33:02.149 14:46:21 version -- app/version.sh@19 -- # patch=1 00:33:02.149 14:46:21 version -- app/version.sh@20 -- # get_header_version suffix 00:33:02.149 14:46:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # cut -f2 00:33:02.149 14:46:21 version -- app/version.sh@14 -- # tr -d '"' 00:33:02.149 14:46:21 version -- app/version.sh@20 -- # suffix=-pre 00:33:02.149 14:46:21 version -- app/version.sh@22 -- # version=24.5 00:33:02.149 14:46:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:33:02.149 14:46:21 version -- app/version.sh@25 -- # version=24.5.1 00:33:02.149 14:46:21 version -- app/version.sh@28 -- # version=24.5.1rc0 00:33:02.149 14:46:21 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:02.149 14:46:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:33:02.149 14:46:21 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:33:02.149 14:46:21 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:33:02.149 00:33:02.149 real 0m0.208s 00:33:02.149 user 0m0.118s 00:33:02.149 sys 0m0.137s 00:33:02.149 14:46:21 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:02.149 14:46:21 version -- common/autotest_common.sh@10 -- # set +x 00:33:02.149 ************************************ 00:33:02.149 END TEST version 00:33:02.149 ************************************ 00:33:02.407 14:46:21 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@198 -- # uname -s 00:33:02.407 14:46:21 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:33:02.407 14:46:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:33:02.407 14:46:21 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:33:02.407 14:46:21 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@260 -- # timing_exit 
lib 00:33:02.407 14:46:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:02.407 14:46:21 -- common/autotest_common.sh@10 -- # set +x 00:33:02.407 14:46:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:33:02.407 14:46:21 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:33:02.407 14:46:21 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:33:02.407 14:46:21 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:02.408 14:46:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:02.408 14:46:21 -- common/autotest_common.sh@10 -- # set +x 00:33:02.408 ************************************ 00:33:02.408 START TEST nvmf_tcp 00:33:02.408 ************************************ 00:33:02.408 14:46:21 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:33:02.408 * Looking for test storage... 00:33:02.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:02.408 14:46:21 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.408 14:46:21 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.408 14:46:21 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.408 14:46:21 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.408 14:46:21 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.408 14:46:21 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.408 14:46:21 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:02.408 14:46:21 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:33:02.408 14:46:21 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:02.408 14:46:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:33:02.408 14:46:21 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:33:02.408 14:46:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:02.408 14:46:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:02.408 14:46:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.408 ************************************ 00:33:02.408 START TEST nvmf_example 00:33:02.408 ************************************ 00:33:02.408 14:46:21 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:33:02.666 * Looking for test storage... 00:33:02.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:33:02.666 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:02.667 Cannot find device "nvmf_init_br" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:02.667 Cannot find device "nvmf_tgt_br" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:02.667 Cannot find device "nvmf_tgt_br2" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:02.667 Cannot find device "nvmf_init_br" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:02.667 Cannot find device "nvmf_tgt_br" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:02.667 Cannot find device "nvmf_tgt_br2" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:02.667 Cannot find device "nvmf_br" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:02.667 Cannot find device "nvmf_init_if" 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:02.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:02.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:33:02.667 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
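[annotation] For reference, the nvmf_veth_init plumbing traced so far (the bridge and firewall rules follow in the next entries) reduces to roughly the sequence below. The namespace, interface, and address names are the ones test/nvmf/common.sh uses in this log; the grouping and ordering here are a condensed sketch, not a verbatim replay.

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator side, two for the target side
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace the nvmf app will run in
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator keeps 10.0.0.1; the namespaced target gets 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up on both sides of the pairs
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up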
00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:02.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:02.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:33:02.925 00:33:02.925 --- 10.0.0.2 ping statistics --- 00:33:02.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.925 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:02.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:02.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:33:02.925 00:33:02.925 --- 10.0.0.3 ping statistics --- 00:33:02.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.925 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:02.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:02.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:33:02.925 00:33:02.925 --- 10.0.0.1 ping statistics --- 00:33:02.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:02.925 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:02.925 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=77034 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 77034 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 77034 ']' 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
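[annotation] Condensed from the entries above: the host-side veth ends are enslaved to one bridge, NVMe/TCP traffic on port 4420 is allowed through, connectivity is sanity-checked in both directions, and the example target is then launched inside the namespace. Paths are shortened here; the flags are the ones visible in the log.

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> namespaced target
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &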
00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:03.231 14:46:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:33:04.166 14:46:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:16.426 Initializing NVMe Controllers 00:33:16.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:16.426 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:16.426 Initialization complete. Launching workers. 00:33:16.426 ======================================================== 00:33:16.426 Latency(us) 00:33:16.426 Device Information : IOPS MiB/s Average min max 00:33:16.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16521.46 64.54 3873.67 588.22 23216.34 00:33:16.426 ======================================================== 00:33:16.426 Total : 16521.46 64.54 3873.67 588.22 23216.34 00:33:16.426 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:16.426 rmmod nvme_tcp 00:33:16.426 rmmod nvme_fabrics 00:33:16.426 rmmod nvme_keyring 00:33:16.426 14:46:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 77034 ']' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 77034 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 77034 ']' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 77034 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77034 00:33:16.426 killing process with pid 77034 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77034' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 77034 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 77034 00:33:16.426 nvmf threads initialize successfully 00:33:16.426 bdev subsystem init successfully 00:33:16.426 created a nvmf target service 00:33:16.426 create targets's poll groups done 00:33:16.426 all subsystems of target started 00:33:16.426 nvmf target is running 00:33:16.426 all subsystems of target stopped 00:33:16.426 destroy targets's poll groups done 00:33:16.426 destroyed the nvmf target service 00:33:16.426 bdev subsystem finish successfully 00:33:16.426 nvmf threads destroy successfully 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:16.426 14:46:34 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:16.426 00:33:16.426 real 0m12.291s 00:33:16.426 user 0m44.471s 00:33:16.426 sys 0m1.676s 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:16.426 14:46:34 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:33:16.426 ************************************ 00:33:16.426 END TEST nvmf_example 00:33:16.426 ************************************ 00:33:16.426 14:46:34 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:33:16.426 14:46:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:16.426 14:46:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:16.426 14:46:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:16.426 ************************************ 00:33:16.426 START TEST nvmf_filesystem 00:33:16.426 ************************************ 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:33:16.426 * Looking for test storage... 
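[annotation] The nvmf_example run that just finished configures the target over JSON-RPC and then drives it with spdk_nvme_perf. Re-running it by hand would look roughly like this; using scripts/rpc.py against the default /var/tmp/spdk.sock socket is an assumption (the test goes through its rpc_cmd wrapper), while the method names, arguments, and perf flags are the ones shown in the log above.

    # target side: TCP transport, one 64 MiB / 512 B-block malloc bdev, one subsystem
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                      # returns Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: 10 s of 4 KiB random I/O, 30 % read mix, queue depth 64
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'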
00:33:16.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:33:16.426 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:33:16.427 14:46:34 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:33:16.427 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:33:16.427 #define SPDK_CONFIG_H 00:33:16.427 #define SPDK_CONFIG_APPS 1 00:33:16.427 #define SPDK_CONFIG_ARCH native 00:33:16.427 #undef SPDK_CONFIG_ASAN 00:33:16.427 #define SPDK_CONFIG_AVAHI 1 00:33:16.427 #undef SPDK_CONFIG_CET 00:33:16.427 #define SPDK_CONFIG_COVERAGE 1 00:33:16.427 #define SPDK_CONFIG_CROSS_PREFIX 00:33:16.427 #undef SPDK_CONFIG_CRYPTO 00:33:16.427 #undef SPDK_CONFIG_CRYPTO_MLX5 00:33:16.427 #undef SPDK_CONFIG_CUSTOMOCF 00:33:16.427 #undef SPDK_CONFIG_DAOS 00:33:16.427 #define SPDK_CONFIG_DAOS_DIR 00:33:16.427 #define SPDK_CONFIG_DEBUG 1 00:33:16.427 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:33:16.427 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:33:16.427 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:33:16.427 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:33:16.427 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:33:16.427 #undef SPDK_CONFIG_DPDK_UADK 00:33:16.427 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:33:16.427 #define SPDK_CONFIG_EXAMPLES 1 00:33:16.427 #undef SPDK_CONFIG_FC 00:33:16.427 #define SPDK_CONFIG_FC_PATH 00:33:16.427 #define SPDK_CONFIG_FIO_PLUGIN 1 00:33:16.427 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:33:16.427 #undef SPDK_CONFIG_FUSE 00:33:16.427 #undef SPDK_CONFIG_FUZZER 00:33:16.427 #define SPDK_CONFIG_FUZZER_LIB 00:33:16.427 #define SPDK_CONFIG_GOLANG 1 00:33:16.427 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:33:16.427 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:33:16.427 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:33:16.427 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:33:16.427 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:33:16.427 #undef SPDK_CONFIG_HAVE_LIBBSD 00:33:16.427 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:33:16.427 #define SPDK_CONFIG_IDXD 1 00:33:16.427 #define SPDK_CONFIG_IDXD_KERNEL 1 00:33:16.427 #undef SPDK_CONFIG_IPSEC_MB 00:33:16.427 #define SPDK_CONFIG_IPSEC_MB_DIR 00:33:16.427 #define SPDK_CONFIG_ISAL 1 00:33:16.427 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:33:16.427 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:33:16.427 #define SPDK_CONFIG_LIBDIR 00:33:16.427 #undef SPDK_CONFIG_LTO 00:33:16.427 #define SPDK_CONFIG_MAX_LCORES 00:33:16.427 #define SPDK_CONFIG_NVME_CUSE 1 00:33:16.427 #undef SPDK_CONFIG_OCF 00:33:16.427 #define SPDK_CONFIG_OCF_PATH 00:33:16.427 #define SPDK_CONFIG_OPENSSL_PATH 00:33:16.427 #undef SPDK_CONFIG_PGO_CAPTURE 00:33:16.427 #define SPDK_CONFIG_PGO_DIR 00:33:16.427 #undef SPDK_CONFIG_PGO_USE 00:33:16.427 #define SPDK_CONFIG_PREFIX /usr/local 00:33:16.427 #undef SPDK_CONFIG_RAID5F 00:33:16.428 #undef SPDK_CONFIG_RBD 00:33:16.428 #define SPDK_CONFIG_RDMA 1 00:33:16.428 
#define SPDK_CONFIG_RDMA_PROV verbs 00:33:16.428 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:33:16.428 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:33:16.428 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:33:16.428 #define SPDK_CONFIG_SHARED 1 00:33:16.428 #undef SPDK_CONFIG_SMA 00:33:16.428 #define SPDK_CONFIG_TESTS 1 00:33:16.428 #undef SPDK_CONFIG_TSAN 00:33:16.428 #define SPDK_CONFIG_UBLK 1 00:33:16.428 #define SPDK_CONFIG_UBSAN 1 00:33:16.428 #undef SPDK_CONFIG_UNIT_TESTS 00:33:16.428 #undef SPDK_CONFIG_URING 00:33:16.428 #define SPDK_CONFIG_URING_PATH 00:33:16.428 #undef SPDK_CONFIG_URING_ZNS 00:33:16.428 #define SPDK_CONFIG_USDT 1 00:33:16.428 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:33:16.428 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:33:16.428 #undef SPDK_CONFIG_VFIO_USER 00:33:16.428 #define SPDK_CONFIG_VFIO_USER_DIR 00:33:16.428 #define SPDK_CONFIG_VHOST 1 00:33:16.428 #define SPDK_CONFIG_VIRTIO 1 00:33:16.428 #undef SPDK_CONFIG_VTUNE 00:33:16.428 #define SPDK_CONFIG_VTUNE_DIR 00:33:16.428 #define SPDK_CONFIG_WERROR 1 00:33:16.428 #define SPDK_CONFIG_WPDK_DIR 00:33:16.428 #undef SPDK_CONFIG_XNVME 00:33:16.428 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:33:16.428 14:46:34 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:33:16.428 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:33:16.429 14:46:34 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 
00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:33:16.429 
14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:33:16.429 14:46:34 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:16.429 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 77277 ]] 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 77277 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local 
requested_size=2147483648 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.5zLREg 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.5zLREg/tests/target /tmp/spdk.5zLREg 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264516608 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494353408 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507157504 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13195022336 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5850066944 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13195022336 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5850066944 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267752448 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=139264 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use 
avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=93128024064 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=6574755840 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:33:16.430 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:33:16.431 * Looking for test storage... 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=13195022336 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:16.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 
00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.431 14:46:34 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:16.431 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:16.432 Cannot find device 
"nvmf_tgt_br" 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:16.432 Cannot find device "nvmf_tgt_br2" 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:16.432 Cannot find device "nvmf_tgt_br" 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:16.432 Cannot find device "nvmf_tgt_br2" 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:16.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:16.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:16.432 14:46:34 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:16.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:33:16.432 00:33:16.432 --- 10.0.0.2 ping statistics --- 00:33:16.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.432 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:16.432 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:16.432 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:33:16.432 00:33:16.432 --- 10.0.0.3 ping statistics --- 00:33:16.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.432 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:16.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:33:16.432 00:33:16.432 --- 10.0.0.1 ping statistics --- 00:33:16.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.432 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:16.432 14:46:34 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:33:16.432 ************************************ 00:33:16.432 START TEST nvmf_filesystem_no_in_capsule 00:33:16.432 ************************************ 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=77438 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 77438 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 77438 ']' 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:16.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.432 [2024-07-22 14:46:35.068044] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:16.432 [2024-07-22 14:46:35.068123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.432 [2024-07-22 14:46:35.208702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:16.432 [2024-07-22 14:46:35.265057] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.432 [2024-07-22 14:46:35.265103] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.432 [2024-07-22 14:46:35.265110] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.432 [2024-07-22 14:46:35.265115] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.432 [2024-07-22 14:46:35.265119] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.432 [2024-07-22 14:46:35.265429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.432 [2024-07-22 14:46:35.265539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:16.432 [2024-07-22 14:46:35.265708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:16.432 [2024-07-22 14:46:35.266644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.432 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:16.433 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:33:16.433 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:33:16.433 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.433 14:46:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.433 [2024-07-22 14:46:36.007448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:16.433 14:46:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.433 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:33:16.433 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.433 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.692 Malloc1 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:16.692 [2024-07-22 14:46:36.174348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:33:16.692 { 00:33:16.692 "aliases": [ 00:33:16.692 "03102a05-0db9-4a27-9d1d-e09c6ef76992" 00:33:16.692 ], 00:33:16.692 "assigned_rate_limits": { 00:33:16.692 "r_mbytes_per_sec": 0, 00:33:16.692 "rw_ios_per_sec": 0, 00:33:16.692 "rw_mbytes_per_sec": 0, 00:33:16.692 "w_mbytes_per_sec": 0 00:33:16.692 }, 00:33:16.692 "block_size": 512, 00:33:16.692 "claim_type": "exclusive_write", 00:33:16.692 "claimed": true, 00:33:16.692 "driver_specific": {}, 00:33:16.692 "memory_domains": [ 00:33:16.692 { 00:33:16.692 "dma_device_id": "system", 00:33:16.692 "dma_device_type": 1 00:33:16.692 }, 00:33:16.692 { 00:33:16.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:16.692 "dma_device_type": 2 00:33:16.692 } 00:33:16.692 ], 00:33:16.692 "name": "Malloc1", 00:33:16.692 "num_blocks": 1048576, 00:33:16.692 "product_name": "Malloc disk", 00:33:16.692 "supported_io_types": { 00:33:16.692 "abort": true, 00:33:16.692 "compare": false, 00:33:16.692 "compare_and_write": false, 00:33:16.692 "flush": true, 00:33:16.692 "nvme_admin": false, 00:33:16.692 "nvme_io": false, 00:33:16.692 "read": true, 00:33:16.692 "reset": true, 00:33:16.692 "unmap": true, 00:33:16.692 "write": true, 00:33:16.692 "write_zeroes": true 00:33:16.692 }, 00:33:16.692 "uuid": "03102a05-0db9-4a27-9d1d-e09c6ef76992", 00:33:16.692 "zoned": false 00:33:16.692 } 00:33:16.692 ]' 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:33:16.692 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:16.950 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:33:16.950 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:33:16.950 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:33:16.950 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:33:16.950 14:46:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:33:18.856 14:46:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:33:18.856 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:33:19.115 14:46:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:20.062 ************************************ 00:33:20.062 START TEST filesystem_ext4 00:33:20.062 ************************************ 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:33:20.062 14:46:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:33:20.062 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:33:20.062 mke2fs 1.46.5 (30-Dec-2021) 00:33:20.321 Discarding device blocks: 0/522240 done 00:33:20.321 Creating filesystem with 522240 1k blocks and 130560 inodes 00:33:20.321 Filesystem UUID: a3952597-c093-4803-9c78-5a9a0f664a06 00:33:20.321 Superblock backups stored on blocks: 00:33:20.321 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:33:20.321 00:33:20.321 Allocating group tables: 0/64 done 00:33:20.321 Writing inode tables: 0/64 done 00:33:20.321 Creating journal (8192 blocks): done 00:33:20.321 Writing superblocks and filesystem accounting information: 0/64 done 00:33:20.321 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 77438 00:33:20.321 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:20.321 14:46:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:20.580 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:20.580 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:20.580 00:33:20.580 real 0m0.343s 00:33:20.580 user 0m0.026s 00:33:20.580 sys 0m0.069s 00:33:20.580 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:20.580 14:46:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:33:20.580 ************************************ 00:33:20.580 END TEST filesystem_ext4 00:33:20.580 ************************************ 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:20.580 ************************************ 00:33:20.580 START TEST filesystem_btrfs 00:33:20.580 ************************************ 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:33:20.580 btrfs-progs v6.6.2 00:33:20.580 See https://btrfs.readthedocs.io for more information. 00:33:20.580 00:33:20.580 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:33:20.580 NOTE: several default settings have changed in version 5.15, please make sure 00:33:20.580 this does not affect your deployments: 00:33:20.580 - DUP for metadata (-m dup) 00:33:20.580 - enabled no-holes (-O no-holes) 00:33:20.580 - enabled free-space-tree (-R free-space-tree) 00:33:20.580 00:33:20.580 Label: (null) 00:33:20.580 UUID: ca505d30-45e3-4332-914e-03f6c01fc4a3 00:33:20.580 Node size: 16384 00:33:20.580 Sector size: 4096 00:33:20.580 Filesystem size: 510.00MiB 00:33:20.580 Block group profiles: 00:33:20.580 Data: single 8.00MiB 00:33:20.580 Metadata: DUP 32.00MiB 00:33:20.580 System: DUP 8.00MiB 00:33:20.580 SSD detected: yes 00:33:20.580 Zoned device: no 00:33:20.580 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:33:20.580 Runtime features: free-space-tree 00:33:20.580 Checksum: crc32c 00:33:20.580 Number of devices: 1 00:33:20.580 Devices: 00:33:20.580 ID SIZE PATH 00:33:20.580 1 510.00MiB /dev/nvme0n1p1 00:33:20.580 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:33:20.580 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 77438 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:20.840 00:33:20.840 real 0m0.236s 00:33:20.840 user 0m0.022s 00:33:20.840 sys 0m0.079s 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:33:20.840 ************************************ 00:33:20.840 END TEST filesystem_btrfs 00:33:20.840 ************************************ 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:33:20.840 14:46:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:20.840 ************************************ 00:33:20.840 START TEST filesystem_xfs 00:33:20.840 ************************************ 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:33:20.840 14:46:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:33:20.840 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:33:20.840 = sectsz=512 attr=2, projid32bit=1 00:33:20.840 = crc=1 finobt=1, sparse=1, rmapbt=0 00:33:20.840 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:33:20.840 data = bsize=4096 blocks=130560, imaxpct=25 00:33:20.840 = sunit=0 swidth=0 blks 00:33:20.840 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:33:20.840 log =internal log bsize=4096 blocks=16384, version=2 00:33:20.840 = sectsz=512 sunit=0 blks, lazy-count=1 00:33:20.840 realtime =none extsz=4096 blocks=0, rtextents=0 00:33:21.776 Discarding blocks...Done. 
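The ext4, btrfs and xfs cases above all go through the same make_filesystem helper: ext4 gets mkfs's -F flag, the other two get -f, and mkfs is run on the GPT partition created earlier. A minimal sketch of that logic, reduced from the traced commands (the helper's retry counter and the exact variable names in autotest_common.sh are omitted here):

    # Sketch: choose the force flag per filesystem type and build it on the partition.
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 wants -F to skip its confirmation prompt
        else
            force=-f        # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1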
00:33:21.776 14:46:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:33:21.776 14:46:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:23.698 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:23.698 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:33:23.698 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:23.698 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 77438 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:23.957 00:33:23.957 real 0m3.062s 00:33:23.957 user 0m0.027s 00:33:23.957 sys 0m0.068s 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:33:23.957 ************************************ 00:33:23.957 END TEST filesystem_xfs 00:33:23.957 ************************************ 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:23.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 
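The nvme connect near the start of each capsule variant and the nvme disconnect traced here bracket the block-device work: the host attaches to the subsystem over TCP, the script polls lsblk until a device with the SPDKISFASTANDAWESOME serial appears, and on teardown it waits for that serial to disappear again. A condensed sketch of that handshake, with the address, subsystem NQN and serial taken from the trace, the --hostnqn/--hostid arguments dropped for brevity, and the polling loops simplified from waitforserial/waitforserial_disconnect:

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME

    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420

    # Wait for the namespace to show up with the expected serial.
    for i in $(seq 1 15); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL" && break
        sleep 2
    done

    nvme disconnect -n "$SUBNQN"

    # Wait for the block device to go away again.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$SERIAL"; do
        sleep 1
    done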
00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 77438 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 77438 ']' 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 77438 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77438 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:23.957 killing process with pid 77438 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77438' 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 77438 00:33:23.957 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 77438 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:33:24.526 00:33:24.526 real 0m8.912s 00:33:24.526 user 0m34.305s 00:33:24.526 sys 0m1.205s 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:24.526 ************************************ 00:33:24.526 END TEST nvmf_filesystem_no_in_capsule 00:33:24.526 ************************************ 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:33:24.526 
************************************ 00:33:24.526 START TEST nvmf_filesystem_in_capsule 00:33:24.526 ************************************ 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=77751 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 77751 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 77751 ']' 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:24.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:24.526 14:46:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:24.526 [2024-07-22 14:46:44.044922] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:24.526 [2024-07-22 14:46:44.045011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.786 [2024-07-22 14:46:44.184208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:24.786 [2024-07-22 14:46:44.237219] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.786 [2024-07-22 14:46:44.237290] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.786 [2024-07-22 14:46:44.237297] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.786 [2024-07-22 14:46:44.237302] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.786 [2024-07-22 14:46:44.237307] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
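nvmfappstart above launches nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace, this time with 4096 bytes of in-capsule data, and waitforlisten blocks until the RPC socket answers; the rpc_cmd lines that follow then create the transport, a 512-block Malloc bdev, and the subsystem with its namespace and TCP listener. A rough stand-alone equivalent using scripts/rpc.py directly, assuming it is run from the SPDK repo root (rpc_cmd in the trace is a thin wrapper around rpc.py, and the spdk_get_version polling loop here is only a stand-in for waitforlisten):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Stand-in for waitforlisten: poll until the RPC socket responds.
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

    # -c 4096 enables 4 KiB of in-capsule data, the only difference from the first pass.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420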
00:33:24.786 [2024-07-22 14:46:44.237441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.786 [2024-07-22 14:46:44.237654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.786 [2024-07-22 14:46:44.237782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.787 [2024-07-22 14:46:44.237788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:25.367 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.367 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.368 14:46:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:25.627 [2024-07-22 14:46:45.000676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:25.627 Malloc1 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:25.627 14:46:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:25.627 [2024-07-22 14:46:45.170228] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.627 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:33:25.628 { 00:33:25.628 "aliases": [ 00:33:25.628 "3f5733dd-739c-4cd5-a697-9070985afa05" 00:33:25.628 ], 00:33:25.628 "assigned_rate_limits": { 00:33:25.628 "r_mbytes_per_sec": 0, 00:33:25.628 "rw_ios_per_sec": 0, 00:33:25.628 "rw_mbytes_per_sec": 0, 00:33:25.628 "w_mbytes_per_sec": 0 00:33:25.628 }, 00:33:25.628 "block_size": 512, 00:33:25.628 "claim_type": "exclusive_write", 00:33:25.628 "claimed": true, 00:33:25.628 "driver_specific": {}, 00:33:25.628 "memory_domains": [ 00:33:25.628 { 00:33:25.628 "dma_device_id": "system", 00:33:25.628 "dma_device_type": 1 00:33:25.628 }, 00:33:25.628 { 00:33:25.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:25.628 "dma_device_type": 2 00:33:25.628 } 00:33:25.628 ], 00:33:25.628 "name": "Malloc1", 00:33:25.628 "num_blocks": 1048576, 00:33:25.628 "product_name": "Malloc disk", 00:33:25.628 "supported_io_types": { 00:33:25.628 "abort": true, 00:33:25.628 "compare": false, 00:33:25.628 "compare_and_write": false, 00:33:25.628 "flush": true, 00:33:25.628 "nvme_admin": false, 00:33:25.628 "nvme_io": false, 00:33:25.628 "read": true, 00:33:25.628 "reset": true, 00:33:25.628 "unmap": true, 00:33:25.628 "write": true, 00:33:25.628 "write_zeroes": true 00:33:25.628 }, 00:33:25.628 "uuid": "3f5733dd-739c-4cd5-a697-9070985afa05", 00:33:25.628 "zoned": false 00:33:25.628 } 00:33:25.628 ]' 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:33:25.628 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:33:25.887 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:33:25.887 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:33:25.887 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:33:25.887 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:33:25.887 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:25.887 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:33:25.888 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:33:25.888 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:33:25.888 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:33:25.888 14:46:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:33:28.425 14:46:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:28.994 ************************************ 00:33:28.994 START TEST filesystem_in_capsule_ext4 00:33:28.994 ************************************ 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:33:28.994 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:33:28.995 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:33:28.995 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:33:28.995 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:33:28.995 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:33:28.995 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:33:28.995 mke2fs 1.46.5 (30-Dec-2021) 00:33:29.254 Discarding device blocks: 0/522240 done 00:33:29.254 Creating filesystem with 522240 1k blocks and 130560 inodes 00:33:29.254 Filesystem UUID: 3591e1c3-0178-4456-9f56-b429c8e70c29 00:33:29.254 Superblock backups stored on blocks: 00:33:29.254 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:33:29.254 00:33:29.254 Allocating group tables: 0/64 done 00:33:29.254 Writing inode tables: 0/64 done 00:33:29.254 Creating journal (8192 blocks): done 00:33:29.254 Writing superblocks and filesystem accounting information: 0/64 done 00:33:29.254 00:33:29.254 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:33:29.254 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:29.254 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 77751 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:29.513 14:46:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:29.513 00:33:29.513 real 0m0.410s 00:33:29.513 user 0m0.018s 00:33:29.513 sys 0m0.071s 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:33:29.513 ************************************ 00:33:29.513 END TEST filesystem_in_capsule_ext4 00:33:29.513 ************************************ 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:29.513 ************************************ 00:33:29.513 START TEST filesystem_in_capsule_btrfs 00:33:29.513 ************************************ 00:33:29.513 14:46:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:33:29.513 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:33:29.773 btrfs-progs v6.6.2 00:33:29.773 See https://btrfs.readthedocs.io for more information. 00:33:29.773 00:33:29.773 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:33:29.773 NOTE: several default settings have changed in version 5.15, please make sure 00:33:29.773 this does not affect your deployments: 00:33:29.773 - DUP for metadata (-m dup) 00:33:29.773 - enabled no-holes (-O no-holes) 00:33:29.773 - enabled free-space-tree (-R free-space-tree) 00:33:29.773 00:33:29.773 Label: (null) 00:33:29.773 UUID: 1485918f-2ab2-4e75-9f71-103e57cd7eb4 00:33:29.773 Node size: 16384 00:33:29.773 Sector size: 4096 00:33:29.773 Filesystem size: 510.00MiB 00:33:29.773 Block group profiles: 00:33:29.773 Data: single 8.00MiB 00:33:29.773 Metadata: DUP 32.00MiB 00:33:29.773 System: DUP 8.00MiB 00:33:29.773 SSD detected: yes 00:33:29.773 Zoned device: no 00:33:29.773 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:33:29.773 Runtime features: free-space-tree 00:33:29.773 Checksum: crc32c 00:33:29.773 Number of devices: 1 00:33:29.773 Devices: 00:33:29.773 ID SIZE PATH 00:33:29.773 1 510.00MiB /dev/nvme0n1p1 00:33:29.773 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 77751 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:29.773 00:33:29.773 real 0m0.222s 00:33:29.773 user 0m0.030s 00:33:29.773 sys 0m0.084s 00:33:29.773 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:33:29.774 ************************************ 00:33:29.774 END TEST filesystem_in_capsule_btrfs 00:33:29.774 ************************************ 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:29.774 ************************************ 00:33:29.774 START TEST filesystem_in_capsule_xfs 00:33:29.774 ************************************ 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:33:29.774 14:46:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:33:30.034 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:33:30.034 = sectsz=512 attr=2, projid32bit=1 00:33:30.034 = crc=1 finobt=1, sparse=1, rmapbt=0 00:33:30.034 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:33:30.034 data = bsize=4096 blocks=130560, imaxpct=25 00:33:30.034 = sunit=0 swidth=0 blks 00:33:30.034 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:33:30.034 log =internal log bsize=4096 blocks=16384, version=2 00:33:30.034 = sectsz=512 sunit=0 blks, lazy-count=1 00:33:30.034 realtime =none extsz=4096 blocks=0, rtextents=0 00:33:30.604 Discarding blocks...Done. 
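Each filesystem case finishes with the same smoke test, already seen for ext4 and btrfs above and repeated for xfs right after this mkfs output: mount the new filesystem, create and delete a file with syncs in between, unmount, then confirm the target process is still alive and the namespace and its partition are still listed. A stripped-down sketch of that sequence ($nvmfpid is the nvmf_tgt pid, 77751 in this run):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    # The target must still be running and the device still visible to lsblk.
    kill -0 "$nvmfpid"
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1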
00:33:30.604 14:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:33:30.604 14:46:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 77751 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:33:32.515 00:33:32.515 real 0m2.604s 00:33:32.515 user 0m0.034s 00:33:32.515 sys 0m0.066s 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:33:32.515 ************************************ 00:33:32.515 END TEST filesystem_in_capsule_xfs 00:33:32.515 ************************************ 00:33:32.515 14:46:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:32.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:32.515 14:46:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 77751 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 77751 ']' 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 77751 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:32.515 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77751 00:33:32.775 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:32.775 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:32.775 killing process with pid 77751 00:33:32.775 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77751' 00:33:32.775 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 77751 00:33:32.775 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 77751 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:33:33.038 00:33:33.038 real 0m8.524s 00:33:33.038 user 0m32.889s 00:33:33.038 sys 0m1.237s 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:33:33.038 ************************************ 00:33:33.038 END TEST nvmf_filesystem_in_capsule 00:33:33.038 ************************************ 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@120 -- # set +e 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:33.038 rmmod nvme_tcp 00:33:33.038 rmmod nvme_fabrics 00:33:33.038 rmmod nvme_keyring 00:33:33.038 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:33.296 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:33:33.296 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:33.297 00:33:33.297 real 0m18.411s 00:33:33.297 user 1m7.506s 00:33:33.297 sys 0m2.921s 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:33.297 14:46:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:33:33.297 ************************************ 00:33:33.297 END TEST nvmf_filesystem 00:33:33.297 ************************************ 00:33:33.297 14:46:52 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:33:33.297 14:46:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:33.297 14:46:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:33.297 14:46:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.297 ************************************ 00:33:33.297 START TEST nvmf_target_discovery 00:33:33.297 ************************************ 00:33:33.297 14:46:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:33:33.555 * Looking for test storage... 
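The xtrace above walks target/filesystem.sh one command at a time; pulled out of the trace, the in-capsule XFS check reduces to a short shell sequence. The sketch below is a summary of what the log shows, reusing the device, mount point and pid from this run; the harness additionally wraps the umount in a small retry loop, omitted here.

  # In-capsule filesystem check, condensed from the trace above.
  # /dev/nvme0n1p1 and /mnt/device are the names used in this run.
  set -e
  dev=/dev/nvme0n1p1
  mnt=/mnt/device

  mount "$dev" "$mnt"              # mount the partition exported over NVMe/TCP
  touch "$mnt/aaa"                 # write a file across the fabric
  sync
  rm "$mnt/aaa"                    # delete it again
  sync
  umount "$mnt"                    # single attempt; the harness retries this

  kill -0 77751                    # nvmf_tgt (pid 77751 in this run) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the initiator
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is the partition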
00:33:33.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:33.555 14:46:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:33.555 Cannot find device "nvmf_tgt_br" 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:33.555 Cannot find device "nvmf_tgt_br2" 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:33.555 Cannot find device "nvmf_tgt_br" 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:33.555 Cannot find device "nvmf_tgt_br2" 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:33.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:33.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:33.555 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:33.837 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:33.837 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:33.837 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:33.837 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:33.837 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:33.837 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:33.838 14:46:53 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:33.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:33.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:33:33.838 00:33:33.838 --- 10.0.0.2 ping statistics --- 00:33:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.838 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:33.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:33.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:33:33.838 00:33:33.838 --- 10.0.0.3 ping statistics --- 00:33:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.838 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:33.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:33.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:33:33.838 00:33:33.838 --- 10.0.0.1 ping statistics --- 00:33:33.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:33.838 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=78197 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 78197 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 78197 ']' 00:33:33.838 14:46:53 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:33.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:33.838 14:46:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.838 [2024-07-22 14:46:53.409798] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:33.838 [2024-07-22 14:46:53.409863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.097 [2024-07-22 14:46:53.549812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:34.097 [2024-07-22 14:46:53.601266] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.097 [2024-07-22 14:46:53.601406] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.097 [2024-07-22 14:46:53.601457] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.097 [2024-07-22 14:46:53.601512] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.097 [2024-07-22 14:46:53.601529] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
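The network plumbing for the discovery test is spread across many wrapped trace lines above; gathered into one place the topology is easier to see: one veth pair for the initiator on the host, two pairs whose far ends live inside the nvmf_tgt_ns_spdk namespace, all joined by a bridge, with TCP/4420 opened in iptables. A condensed sketch of what the log shows, reusing the interface names and 10.0.0.x addresses from the trace:

  # Virtual topology built by the harness before starting nvmf_tgt.
  set -e
  ip netns add nvmf_tgt_ns_spdk

  # one veth pair for the initiator side, two for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # tie everything together with a bridge and open TCP/4420 for the initiator
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity pings in both directions, as in the log
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1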
00:33:34.097 [2024-07-22 14:46:53.601791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.097 [2024-07-22 14:46:53.601884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:34.097 [2024-07-22 14:46:53.602100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.097 [2024-07-22 14:46:53.602102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:34.666 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:34.666 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:33:34.666 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:34.666 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:34.666 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.925 [2024-07-22 14:46:54.349969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.925 Null1 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:34.925 [2024-07-22 14:46:54.407497] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.925 Null2 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:33:34.925 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 Null3 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 Null4 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.926 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.926 
14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 4420 00:33:35.186 00:33:35.186 Discovery Log Number of Records 6, Generation counter 6 00:33:35.186 =====Discovery Log Entry 0====== 00:33:35.186 trtype: tcp 00:33:35.186 adrfam: ipv4 00:33:35.186 subtype: current discovery subsystem 00:33:35.186 treq: not required 00:33:35.186 portid: 0 00:33:35.186 trsvcid: 4420 00:33:35.186 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:35.186 traddr: 10.0.0.2 00:33:35.186 eflags: explicit discovery connections, duplicate discovery information 00:33:35.186 sectype: none 00:33:35.186 =====Discovery Log Entry 1====== 00:33:35.186 trtype: tcp 00:33:35.186 adrfam: ipv4 00:33:35.186 subtype: nvme subsystem 00:33:35.186 treq: not required 00:33:35.186 portid: 0 00:33:35.186 trsvcid: 4420 00:33:35.186 subnqn: nqn.2016-06.io.spdk:cnode1 00:33:35.186 traddr: 10.0.0.2 00:33:35.186 eflags: none 00:33:35.186 sectype: none 00:33:35.186 =====Discovery Log Entry 2====== 00:33:35.186 trtype: tcp 00:33:35.186 adrfam: ipv4 00:33:35.186 subtype: nvme subsystem 00:33:35.186 treq: not required 00:33:35.186 portid: 0 00:33:35.186 trsvcid: 4420 00:33:35.186 subnqn: nqn.2016-06.io.spdk:cnode2 00:33:35.186 traddr: 10.0.0.2 00:33:35.186 eflags: none 00:33:35.186 sectype: none 00:33:35.186 =====Discovery Log Entry 3====== 00:33:35.186 trtype: tcp 00:33:35.186 adrfam: ipv4 00:33:35.186 subtype: nvme subsystem 00:33:35.186 treq: not required 00:33:35.186 portid: 0 00:33:35.186 trsvcid: 4420 00:33:35.186 subnqn: nqn.2016-06.io.spdk:cnode3 00:33:35.186 traddr: 10.0.0.2 00:33:35.186 eflags: none 00:33:35.186 sectype: none 00:33:35.186 =====Discovery Log Entry 4====== 00:33:35.186 trtype: tcp 00:33:35.186 adrfam: ipv4 00:33:35.186 subtype: nvme subsystem 00:33:35.186 treq: not required 00:33:35.186 portid: 0 00:33:35.186 trsvcid: 4420 00:33:35.186 subnqn: nqn.2016-06.io.spdk:cnode4 00:33:35.186 traddr: 10.0.0.2 00:33:35.186 eflags: none 00:33:35.186 sectype: none 00:33:35.186 =====Discovery Log Entry 5====== 00:33:35.186 trtype: tcp 00:33:35.186 adrfam: ipv4 00:33:35.186 subtype: discovery subsystem referral 00:33:35.186 treq: not required 00:33:35.186 portid: 0 00:33:35.186 trsvcid: 4430 00:33:35.186 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:35.186 traddr: 10.0.0.2 00:33:35.186 eflags: none 00:33:35.186 sectype: none 00:33:35.186 Perform nvmf subsystem discovery via RPC 00:33:35.186 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:33:35.186 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:33:35.186 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.186 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.186 [ 00:33:35.186 { 00:33:35.186 "allow_any_host": true, 00:33:35.186 "hosts": [], 00:33:35.186 "listen_addresses": [ 00:33:35.186 { 00:33:35.186 "adrfam": "IPv4", 00:33:35.186 "traddr": "10.0.0.2", 00:33:35.186 "trsvcid": "4420", 00:33:35.186 "trtype": "TCP" 00:33:35.186 } 00:33:35.186 ], 00:33:35.186 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:35.186 "subtype": "Discovery" 00:33:35.186 }, 00:33:35.186 { 00:33:35.186 "allow_any_host": true, 00:33:35.186 "hosts": [], 00:33:35.186 "listen_addresses": [ 00:33:35.186 { 
00:33:35.186 "adrfam": "IPv4", 00:33:35.186 "traddr": "10.0.0.2", 00:33:35.186 "trsvcid": "4420", 00:33:35.186 "trtype": "TCP" 00:33:35.186 } 00:33:35.186 ], 00:33:35.186 "max_cntlid": 65519, 00:33:35.186 "max_namespaces": 32, 00:33:35.186 "min_cntlid": 1, 00:33:35.186 "model_number": "SPDK bdev Controller", 00:33:35.186 "namespaces": [ 00:33:35.186 { 00:33:35.186 "bdev_name": "Null1", 00:33:35.186 "name": "Null1", 00:33:35.186 "nguid": "B8B22FBAB35C4840814266A1C541B6EB", 00:33:35.186 "nsid": 1, 00:33:35.186 "uuid": "b8b22fba-b35c-4840-8142-66a1c541b6eb" 00:33:35.186 } 00:33:35.186 ], 00:33:35.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:35.186 "serial_number": "SPDK00000000000001", 00:33:35.186 "subtype": "NVMe" 00:33:35.186 }, 00:33:35.186 { 00:33:35.186 "allow_any_host": true, 00:33:35.186 "hosts": [], 00:33:35.186 "listen_addresses": [ 00:33:35.186 { 00:33:35.186 "adrfam": "IPv4", 00:33:35.186 "traddr": "10.0.0.2", 00:33:35.186 "trsvcid": "4420", 00:33:35.186 "trtype": "TCP" 00:33:35.186 } 00:33:35.186 ], 00:33:35.186 "max_cntlid": 65519, 00:33:35.186 "max_namespaces": 32, 00:33:35.186 "min_cntlid": 1, 00:33:35.186 "model_number": "SPDK bdev Controller", 00:33:35.186 "namespaces": [ 00:33:35.186 { 00:33:35.186 "bdev_name": "Null2", 00:33:35.187 "name": "Null2", 00:33:35.187 "nguid": "E9C1565726054A178431A1093447D828", 00:33:35.187 "nsid": 1, 00:33:35.187 "uuid": "e9c15657-2605-4a17-8431-a1093447d828" 00:33:35.187 } 00:33:35.187 ], 00:33:35.187 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:35.187 "serial_number": "SPDK00000000000002", 00:33:35.187 "subtype": "NVMe" 00:33:35.187 }, 00:33:35.187 { 00:33:35.187 "allow_any_host": true, 00:33:35.187 "hosts": [], 00:33:35.187 "listen_addresses": [ 00:33:35.187 { 00:33:35.187 "adrfam": "IPv4", 00:33:35.187 "traddr": "10.0.0.2", 00:33:35.187 "trsvcid": "4420", 00:33:35.187 "trtype": "TCP" 00:33:35.187 } 00:33:35.187 ], 00:33:35.187 "max_cntlid": 65519, 00:33:35.187 "max_namespaces": 32, 00:33:35.187 "min_cntlid": 1, 00:33:35.187 "model_number": "SPDK bdev Controller", 00:33:35.187 "namespaces": [ 00:33:35.187 { 00:33:35.187 "bdev_name": "Null3", 00:33:35.187 "name": "Null3", 00:33:35.187 "nguid": "B00C9789FAAA4284A637182369B3EB34", 00:33:35.187 "nsid": 1, 00:33:35.187 "uuid": "b00c9789-faaa-4284-a637-182369b3eb34" 00:33:35.187 } 00:33:35.187 ], 00:33:35.187 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:33:35.187 "serial_number": "SPDK00000000000003", 00:33:35.187 "subtype": "NVMe" 00:33:35.187 }, 00:33:35.187 { 00:33:35.187 "allow_any_host": true, 00:33:35.187 "hosts": [], 00:33:35.187 "listen_addresses": [ 00:33:35.187 { 00:33:35.187 "adrfam": "IPv4", 00:33:35.187 "traddr": "10.0.0.2", 00:33:35.187 "trsvcid": "4420", 00:33:35.187 "trtype": "TCP" 00:33:35.187 } 00:33:35.187 ], 00:33:35.187 "max_cntlid": 65519, 00:33:35.187 "max_namespaces": 32, 00:33:35.187 "min_cntlid": 1, 00:33:35.187 "model_number": "SPDK bdev Controller", 00:33:35.187 "namespaces": [ 00:33:35.187 { 00:33:35.187 "bdev_name": "Null4", 00:33:35.187 "name": "Null4", 00:33:35.187 "nguid": "9C2D5B021B464456B336E30C4ECFF331", 00:33:35.187 "nsid": 1, 00:33:35.187 "uuid": "9c2d5b02-1b46-4456-b336-e30c4ecff331" 00:33:35.187 } 00:33:35.187 ], 00:33:35.187 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:33:35.187 "serial_number": "SPDK00000000000004", 00:33:35.187 "subtype": "NVMe" 00:33:35.187 } 00:33:35.187 ] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:35.187 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:35.187 rmmod nvme_tcp 00:33:35.187 rmmod nvme_fabrics 00:33:35.447 rmmod nvme_keyring 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 78197 ']' 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 78197 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 78197 ']' 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 78197 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78197 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:35.447 
14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:35.447 killing process with pid 78197 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78197' 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 78197 00:33:35.447 14:46:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 78197 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.447 14:46:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.707 14:46:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:35.707 ************************************ 00:33:35.707 END TEST nvmf_target_discovery 00:33:35.707 ************************************ 00:33:35.707 00:33:35.707 real 0m2.299s 00:33:35.707 user 0m6.156s 00:33:35.707 sys 0m0.638s 00:33:35.707 14:46:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:35.707 14:46:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.707 14:46:55 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:33:35.707 14:46:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:35.707 14:46:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:35.707 14:46:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:35.707 ************************************ 00:33:35.707 START TEST nvmf_referrals 00:33:35.707 ************************************ 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:33:35.707 * Looking for test storage... 
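Before the referrals test begins, the target-discovery run above is worth condensing. Stripped of the rpc_cmd/xtrace plumbing, it creates a TCP transport, four null-bdev-backed subsystems listening on 10.0.0.2:4420, a discovery listener plus one referral on port 4430, checks that nvme discover reports six log entries, and tears everything down again. A sketch of the same flow, assuming SPDK's scripts/rpc.py client on the default /var/tmp/spdk.sock (the harness's rpc_cmd wrapper plays that role in the log); the RPC names are the ones the trace shows:

  rpc=scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192

  for i in 1 2 3 4; do
      $rpc bdev_null_create "Null$i" 102400 512
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done

  # expose the discovery service itself and one referral on port 4430
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

  # the initiator should now see 6 records: discovery, 4 subsystems, 1 referral
  # (the harness also passes --hostnqn/--hostid, omitted here)
  nvme discover -t tcp -a 10.0.0.2 -s 4420 | grep -c '^=====Discovery Log Entry'

  # teardown mirrors the trace: subsystems and bdevs first, then the referral
  for i in 1 2 3 4; do
      $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      $rpc bdev_null_delete "Null$i"
  done
  $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430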
00:33:35.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:35.707 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:35.967 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:35.968 Cannot find device "nvmf_tgt_br" 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:35.968 Cannot find device "nvmf_tgt_br2" 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:35.968 Cannot find device "nvmf_tgt_br" 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:35.968 Cannot find device "nvmf_tgt_br2" 
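The "Cannot find device" messages here are expected rather than errors: before building a fresh topology for the referrals test, the shared veth setup first tries to dismantle whatever the previous test left behind and treats a missing link as success (each failing ip command is followed by "# true" in the trace). Roughly, the pattern is:

  # Best-effort teardown of leftovers from the previous test; errors are tolerated.
  for link in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$link" nomaster || true
      ip link set "$link" down     || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if        || true
  # the target-side veths lived in nvmf_tgt_ns_spdk, which remove_spdk_ns already
  # deleted, so the ip netns exec deletions fail harmlessly as well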
00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:35.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:35.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:35.968 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:36.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:36.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:33:36.228 00:33:36.228 --- 10.0.0.2 ping statistics --- 00:33:36.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.228 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:36.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:36.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:33:36.228 00:33:36.228 --- 10.0.0.3 ping statistics --- 00:33:36.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.228 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:36.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:36.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:33:36.228 00:33:36.228 --- 10.0.0.1 ping statistics --- 00:33:36.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:36.228 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=78428 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 78428 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 78428 ']' 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:36.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
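The three successful pings above close out nvmf_veth_init: initiator and target now sit on a private veth/bridge topology, with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. A consolidated sketch of that setup, assembled from the commands already visible in the trace (run as root; error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    # One veth pair per endpoint; the *_br peers stay in the host namespace.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target interfaces.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side peers together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity check in both directions, as the trace does.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1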
00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:36.228 14:46:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:36.228 [2024-07-22 14:46:55.809268] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:36.228 [2024-07-22 14:46:55.809346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:36.488 [2024-07-22 14:46:55.951159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:36.488 [2024-07-22 14:46:56.005139] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:36.488 [2024-07-22 14:46:56.005197] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:36.488 [2024-07-22 14:46:56.005203] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:36.488 [2024-07-22 14:46:56.005208] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:36.488 [2024-07-22 14:46:56.005212] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:36.488 [2024-07-22 14:46:56.005361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:36.488 [2024-07-22 14:46:56.005548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:36.488 [2024-07-22 14:46:56.006470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.488 [2024-07-22 14:46:56.006472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:37.059 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.059 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:33:37.059 14:46:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:37.059 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.059 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.318 [2024-07-22 14:46:56.742353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.318 [2024-07-22 14:46:56.771399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 8009 *** 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:33:37.318 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:37.319 14:46:56 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:37.319 14:46:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:33:37.578 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:33:37.837 14:46:57 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:33:37.837 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:33:38.096 14:46:57 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:38.096 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.097 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:33:38.097 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:33:38.097 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.097 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:38.355 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.355 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:33:38.355 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:33:38.355 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:33:38.356 
14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:38.356 rmmod nvme_tcp 00:33:38.356 rmmod nvme_fabrics 00:33:38.356 rmmod nvme_keyring 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 78428 ']' 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 78428 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 78428 ']' 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 78428 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78428 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:38.356 killing process with pid 78428 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78428' 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 78428 00:33:38.356 14:46:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 78428 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:38.615 00:33:38.615 real 0m3.037s 00:33:38.615 user 0m9.534s 00:33:38.615 sys 0m0.935s 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:33:38.615 14:46:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:33:38.615 ************************************ 00:33:38.615 END TEST nvmf_referrals 00:33:38.615 ************************************ 00:33:38.874 14:46:58 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:33:38.874 14:46:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:38.874 14:46:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:38.874 14:46:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.874 ************************************ 00:33:38.874 START TEST nvmf_connect_disconnect 00:33:38.874 ************************************ 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:33:38.874 * Looking for test storage... 00:33:38.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:38.874 14:46:58 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:38.874 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:38.875 Cannot find device 
"nvmf_tgt_br" 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:33:38.875 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:39.134 Cannot find device "nvmf_tgt_br2" 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:39.134 Cannot find device "nvmf_tgt_br" 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:39.134 Cannot find device "nvmf_tgt_br2" 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:39.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:39.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # 
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:39.134 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:39.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:33:39.393 00:33:39.393 --- 10.0.0.2 ping statistics --- 00:33:39.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.393 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:39.393 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:39.393 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.130 ms 00:33:39.393 00:33:39.393 --- 10.0.0.3 ping statistics --- 00:33:39.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.393 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:39.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:33:39.393 00:33:39.393 --- 10.0.0.1 ping statistics --- 00:33:39.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.393 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:39.393 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=78732 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 78732 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 78732 ']' 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:39.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:39.394 14:46:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:39.394 [2024-07-22 14:46:58.925933] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:39.394 [2024-07-22 14:46:58.926010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:39.669 [2024-07-22 14:46:59.068816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:39.670 [2024-07-22 14:46:59.122076] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:39.670 [2024-07-22 14:46:59.122205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:39.670 [2024-07-22 14:46:59.122240] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:39.670 [2024-07-22 14:46:59.122266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:39.670 [2024-07-22 14:46:59.122281] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:39.670 [2024-07-22 14:46:59.122601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.670 [2024-07-22 14:46:59.122737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:39.670 [2024-07-22 14:46:59.122742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.670 [2024-07-22 14:46:59.122628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.236 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:40.236 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:40.237 [2024-07-22 14:46:59.842343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.237 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
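Before the connect/disconnect loop can run, nvmf_tgt is launched inside the target namespace and driven over its JSON-RPC socket, as traced above (nvmfappstart, waitforlisten 78732, nvmf_create_transport). A rough sketch of that launch-and-wait step; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, and rpc.py is assumed to be invoked from the SPDK repository root the way rpc_cmd does it:

    # Start the target inside the namespace (binary path and flags from the trace).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait for the JSON-RPC socket to appear (simplified waitforlisten).
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done

    # Create the TCP transport with the same options the trace passes via rpc_cmd.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192 -c 0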
00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:40.496 [2024-07-22 14:46:59.907158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:33:40.496 14:46:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:33:43.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:44.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:47.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:49.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:51.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:53.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:55.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:58.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:00.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:02.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:04.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:07.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:09.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:11.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:13.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:16.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:18.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:20.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:22.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:25.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:26.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:29.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:31.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:34.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:35.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:38.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:40.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:42.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:44.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:47.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:49.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:34:51.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:53.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:56.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:58.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:00.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:02.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:05.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:07.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:09.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:11.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:14.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:16.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:18.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:20.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:23.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:25.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:27.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:29.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:32.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:33.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:36.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:38.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:40.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:42.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:45.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:47.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:49.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:51.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:54.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:56.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:58.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:00.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:03.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:05.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:07.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:09.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:12.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:13.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:16.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:18.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:20.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:22.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:25.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:27.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:29.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:31.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:34.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:36.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:38.812 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:40.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:43.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:45.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:47.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:49.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:52.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:54.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:56.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:58.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:01.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:02.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:05.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:07.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:09.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:11.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:14.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:16.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:18.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:20.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:23.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:23.371 rmmod nvme_tcp 00:37:23.371 rmmod nvme_fabrics 00:37:23.371 rmmod nvme_keyring 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 78732 ']' 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 78732 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 78732 ']' 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 78732 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:23.371 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78732 00:37:23.372 killing process with pid 78732 00:37:23.372 14:50:42 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78732' 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 78732 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 78732 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:23.372 00:37:23.372 real 3m44.601s 00:37:23.372 user 14m44.827s 00:37:23.372 sys 0m15.357s 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:23.372 ************************************ 00:37:23.372 END TEST nvmf_connect_disconnect 00:37:23.372 ************************************ 00:37:23.372 14:50:42 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:23.372 14:50:42 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:37:23.372 14:50:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:23.372 14:50:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:23.372 14:50:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:23.372 ************************************ 00:37:23.372 START TEST nvmf_multitarget 00:37:23.372 ************************************ 00:37:23.372 14:50:42 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:37:23.630 * Looking for test storage... 
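[editor's note] The long run of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above is the output of the test's 100-iteration connect/disconnect loop (num_iterations=100, NVME_CONNECT='nvme connect -i 8'). A rough equivalent with nvme-cli against the listener created earlier, sketched here rather than copied from the script, would be:

# Hypothetical loop; the real test also waits for the controller and namespace to appear.
for i in $(seq 1 100); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -i 8
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # emits "... disconnected 1 controller(s)"
done

Each disconnect prints one of the timestamped lines above, which is why they arrive a couple of seconds apart for the full 100 iterations.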
00:37:23.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.630 14:50:43 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:23.630 Cannot find device "nvmf_tgt_br" 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:23.630 Cannot find device "nvmf_tgt_br2" 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:23.630 Cannot find device "nvmf_tgt_br" 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:23.630 Cannot find device "nvmf_tgt_br2" 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:37:23.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:37:23.630 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:23.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:23.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:23.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:37:23.888 00:37:23.888 --- 10.0.0.2 ping statistics --- 00:37:23.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.888 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:23.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:23.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:37:23.888 00:37:23.888 --- 10.0.0.3 ping statistics --- 00:37:23.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.888 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:23.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:23.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:37:23.888 00:37:23.888 --- 10.0.0.1 ping statistics --- 00:37:23.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.888 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.888 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:23.889 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=82493 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 82493 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 82493 ']' 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:24.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
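[editor's note] The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down any leftover setup from a previous run, then rebuilds it. Condensed, the build-up commands recorded in the log amount to roughly the following (the second veth pair, nvmf_tgt_if2 / 10.0.0.3, is created the same way and omitted here):

# Condensed from the nvmf_veth_init steps logged above; a sketch, not the script itself.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side, host netns
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target side, test netns
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2        # host -> namespace reachability check (0.164 ms above)

The one-packet pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, 10.0.0.1 confirm the bridge is forwarding before the target application is started.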
00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:24.147 14:50:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:37:24.147 [2024-07-22 14:50:43.571087] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:24.147 [2024-07-22 14:50:43.571139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:24.147 [2024-07-22 14:50:43.696735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:24.147 [2024-07-22 14:50:43.745555] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:24.147 [2024-07-22 14:50:43.745691] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:24.147 [2024-07-22 14:50:43.745724] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:24.147 [2024-07-22 14:50:43.745749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:24.147 [2024-07-22 14:50:43.745763] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:24.147 [2024-07-22 14:50:43.746076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.147 [2024-07-22 14:50:43.746144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:24.147 [2024-07-22 14:50:43.746299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.147 [2024-07-22 14:50:43.746300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:37:25.084 "nvmf_tgt_1" 00:37:25.084 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:37:25.343 "nvmf_tgt_2" 00:37:25.343 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:37:25.343 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:37:25.343 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:37:25.343 14:50:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:37:25.601 true 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:37:25.601 true 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:25.601 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:25.860 rmmod nvme_tcp 00:37:25.860 rmmod nvme_fabrics 00:37:25.860 rmmod nvme_keyring 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 82493 ']' 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 82493 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 82493 ']' 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 82493 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82493 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:25.860 killing process with pid 82493 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82493' 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 82493 00:37:25.860 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 82493 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:26.119 00:37:26.119 real 0m2.660s 00:37:26.119 user 0m8.172s 00:37:26.119 sys 0m0.750s 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:26.119 14:50:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:37:26.119 ************************************ 00:37:26.119 END TEST nvmf_multitarget 00:37:26.119 ************************************ 00:37:26.119 14:50:45 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:37:26.119 14:50:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:26.119 14:50:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:26.119 14:50:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:26.119 ************************************ 00:37:26.119 START TEST nvmf_rpc 00:37:26.119 ************************************ 00:37:26.119 14:50:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:37:26.379 * Looking for test storage... 
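[editor's note] The nvmf_multitarget test that just finished drives the extra-target RPCs through test/nvmf/target/multitarget_rpc.py: it checks that only the default target exists, creates two more, verifies the count, deletes them, and verifies the count again. A sketch of that flow, assuming the helper script and a running nvmf_tgt:

# Sketch of the multitarget flow shown in the log above; error handling omitted.
rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32          # log prints "nvmf_tgt_1"
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32          # log prints "nvmf_tgt_2"
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
$rpc_py nvmf_delete_target -n nvmf_tgt_1                # log prints "true"
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]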
00:37:26.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:26.379 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:26.380 Cannot find device "nvmf_tgt_br" 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:26.380 Cannot find device "nvmf_tgt_br2" 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:26.380 Cannot find device "nvmf_tgt_br" 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:26.380 Cannot find device "nvmf_tgt_br2" 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:26.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:37:26.380 14:50:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:26.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:26.380 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:37:26.380 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:26.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:26.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:37:26.639 00:37:26.639 --- 10.0.0.2 ping statistics --- 00:37:26.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.639 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:26.639 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:26.639 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:37:26.639 00:37:26.639 --- 10.0.0.3 ping statistics --- 00:37:26.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.639 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:26.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:26.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:37:26.639 00:37:26.639 --- 10.0.0.1 ping statistics --- 00:37:26.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:26.639 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:26.639 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=82731 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 82731 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 82731 ']' 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:26.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:26.898 14:50:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:26.898 [2024-07-22 14:50:46.349030] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:26.898 [2024-07-22 14:50:46.349100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:26.898 [2024-07-22 14:50:46.485758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:27.202 [2024-07-22 14:50:46.532433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:27.202 [2024-07-22 14:50:46.532484] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
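[editor's note] As with the earlier test, the target application is launched inside the namespace and the harness blocks in waitforlisten until its RPC socket answers. A simplified version of that start-up step (the paths and flags match the log; the polling loop is an assumption, and the real helper handles timeouts and the PID more carefully):

# Simplified start-up sketch, not the waitforlisten implementation itself.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1    # keep polling until the app listens on /var/tmp/spdk.sock
done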
00:37:27.202 [2024-07-22 14:50:46.532490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:27.202 [2024-07-22 14:50:46.532495] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:27.202 [2024-07-22 14:50:46.532499] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:27.202 [2024-07-22 14:50:46.532758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.202 [2024-07-22 14:50:46.532894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:27.202 [2024-07-22 14:50:46.532889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.202 [2024-07-22 14:50:46.532814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:37:27.768 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:37:27.769 "poll_groups": [ 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_000", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [] 00:37:27.769 }, 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_001", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [] 00:37:27.769 }, 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_002", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [] 00:37:27.769 }, 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_003", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [] 00:37:27.769 } 00:37:27.769 ], 00:37:27.769 "tick_rate": 2290000000 00:37:27.769 }' 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:27.769 [2024-07-22 14:50:47.356082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.769 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:37:27.769 "poll_groups": [ 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_000", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [ 00:37:27.769 { 00:37:27.769 "trtype": "TCP" 00:37:27.769 } 00:37:27.769 ] 00:37:27.769 }, 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_001", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [ 00:37:27.769 { 00:37:27.769 "trtype": "TCP" 00:37:27.769 } 00:37:27.769 ] 00:37:27.769 }, 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_002", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [ 00:37:27.769 { 00:37:27.769 "trtype": "TCP" 00:37:27.769 } 00:37:27.769 ] 00:37:27.769 }, 00:37:27.769 { 00:37:27.769 "admin_qpairs": 0, 00:37:27.769 "completed_nvme_io": 0, 00:37:27.769 "current_admin_qpairs": 0, 00:37:27.769 "current_io_qpairs": 0, 00:37:27.769 "io_qpairs": 0, 00:37:27.769 "name": "nvmf_tgt_poll_group_003", 00:37:27.769 "pending_bdev_io": 0, 00:37:27.769 "transports": [ 00:37:27.769 { 00:37:27.769 "trtype": "TCP" 00:37:27.769 } 00:37:27.769 ] 00:37:27.769 } 00:37:27.769 ], 00:37:27.769 "tick_rate": 2290000000 00:37:27.769 }' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:28.028 Malloc1 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:28.028 [2024-07-22 14:50:47.526198] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -a 10.0.0.2 -s 4420 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -a 10.0.0.2 -s 4420 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -a 10.0.0.2 -s 4420 00:37:28.028 [2024-07-22 14:50:47.562405] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5' 00:37:28.028 Failed to write to /dev/nvme-fabrics: Input/output error 00:37:28.028 could not add new controller: failed to write to nvme-fabrics device 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:28.028 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:28.287 14:50:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:37:28.287 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:37:28.288 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:28.288 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:28.288 14:50:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:30.193 14:50:49 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:30.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:30.193 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:30.453 [2024-07-22 14:50:49.889104] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5' 00:37:30.453 Failed to write to /dev/nvme-fabrics: Input/output error 00:37:30.453 could not add new controller: failed to write to nvme-fabrics device 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.453 14:50:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:30.453 14:50:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:37:30.453 14:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:37:30.453 14:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:30.453 14:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:30.453 14:50:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:32.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:32.989 14:50:52 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:32.989 [2024-07-22 14:50:52.216171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:32.989 14:50:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:34.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:34.895 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:35.155 [2024-07-22 14:50:54.534856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 
-- # local i=0 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:35.155 14:50:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:37:37.694 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:37.694 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:37.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.695 [2024-07-22 14:50:56.825723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:37.695 14:50:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:37.695 14:50:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:37.695 14:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:37:37.695 14:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:37.695 14:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:37.695 14:50:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:39.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:39.602 [2024-07-22 14:50:59.152363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:39.602 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:39.861 14:50:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:39.861 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:37:39.861 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:39.861 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:39.861 14:50:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:37:41.768 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:41.768 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:41.768 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:41.768 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:41.768 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:41.768 
14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:37:41.768 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:42.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:42.028 [2024-07-22 14:51:01.475146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:42.028 14:51:01 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:42.028 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:42.286 14:51:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:37:42.286 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:37:42.286 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:37:42.286 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:37:42.286 14:51:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:37:44.190 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:37:44.190 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:37:44.190 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:37:44.190 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:37:44.190 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:37:44.190 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:37:44.190 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:44.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 [2024-07-22 14:51:03.925640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.449 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.450 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.450 14:51:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 [2024-07-22 14:51:03.997622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.450 [2024-07-22 14:51:04.069521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.450 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
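The rpc.sh@99–@107 loop running here repeatedly builds and tears down the subsystem without any host ever connecting, so the qpair counters are exercised only by the earlier connect cycles. One iteration of the loop, sketched as plain rpc.py calls with the NQN, serial, and listen address taken from this log (the script itself is illustrative, not the test code):

#!/usr/bin/env bash
# One create/teardown iteration, as a sketch (values copied from this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME          # serial used throughout the log
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc1                          # Malloc1 comes from bdev_malloc_create 64 512
"$rpc" nvmf_subsystem_allow_any_host "$nqn"

# Teardown: remove the namespace (nsid 1 in this loop), then delete the subsystem.
"$rpc" nvmf_subsystem_remove_ns "$nqn" 1
"$rpc" nvmf_delete_subsystem "$nqn"

The earlier rpc.sh@81–@94 loop has the same shape, except it pins the namespace to nsid 5 (nvmf_subsystem_add_ns ... -n 5) and actually connects a host between setup and teardown.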
00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.708 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 [2024-07-22 14:51:04.141427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
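Every connect in those earlier cycles follows the same waitforserial pattern: attach with nvme-cli, poll lsblk until a namespace carrying the subsystem serial appears, then disconnect. A hedged sketch of that pattern, with the host NQN/ID, address, and serial copied from this run and the polling loop simplified relative to the autotest helper:

#!/usr/bin/env bash
# Sketch of the connect / wait-for-serial / disconnect pattern used throughout this test.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5
hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME

nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420

# Poll until lsblk reports a block device with the expected serial (bounded retries).
for ((i = 0; i < 15; i++)); do
    if lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
        break
    fi
    sleep 2
done

nvme disconnect -n "$nqn"

When the subsystem does not allow the host NQN, as in the two NOT-wrapped connect attempts earlier, nvme connect itself fails with "Failed to write to /dev/nvme-fabrics: Input/output error" and the target logs nvmf_qpair_access_allowed; adding the host via nvmf_subsystem_add_host, or enabling allow_any_host, is what lets the later connects through.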
00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 [2024-07-22 14:51:04.213369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:37:44.709 "poll_groups": [ 00:37:44.709 { 00:37:44.709 "admin_qpairs": 2, 00:37:44.709 "completed_nvme_io": 66, 00:37:44.709 "current_admin_qpairs": 0, 00:37:44.709 "current_io_qpairs": 0, 00:37:44.709 "io_qpairs": 16, 00:37:44.709 "name": "nvmf_tgt_poll_group_000", 00:37:44.709 "pending_bdev_io": 0, 00:37:44.709 "transports": [ 00:37:44.709 { 00:37:44.709 "trtype": "TCP" 00:37:44.709 } 00:37:44.709 ] 00:37:44.709 }, 00:37:44.709 { 00:37:44.709 "admin_qpairs": 3, 00:37:44.709 "completed_nvme_io": 118, 00:37:44.709 "current_admin_qpairs": 0, 00:37:44.709 "current_io_qpairs": 0, 00:37:44.709 "io_qpairs": 17, 00:37:44.709 "name": "nvmf_tgt_poll_group_001", 00:37:44.709 "pending_bdev_io": 0, 00:37:44.709 "transports": [ 00:37:44.709 { 00:37:44.709 "trtype": "TCP" 00:37:44.709 } 00:37:44.709 ] 00:37:44.709 }, 00:37:44.709 { 00:37:44.709 "admin_qpairs": 1, 00:37:44.709 
"completed_nvme_io": 120, 00:37:44.709 "current_admin_qpairs": 0, 00:37:44.709 "current_io_qpairs": 0, 00:37:44.709 "io_qpairs": 19, 00:37:44.709 "name": "nvmf_tgt_poll_group_002", 00:37:44.709 "pending_bdev_io": 0, 00:37:44.709 "transports": [ 00:37:44.709 { 00:37:44.709 "trtype": "TCP" 00:37:44.709 } 00:37:44.709 ] 00:37:44.709 }, 00:37:44.709 { 00:37:44.709 "admin_qpairs": 1, 00:37:44.709 "completed_nvme_io": 116, 00:37:44.709 "current_admin_qpairs": 0, 00:37:44.709 "current_io_qpairs": 0, 00:37:44.709 "io_qpairs": 18, 00:37:44.709 "name": "nvmf_tgt_poll_group_003", 00:37:44.709 "pending_bdev_io": 0, 00:37:44.709 "transports": [ 00:37:44.709 { 00:37:44.709 "trtype": "TCP" 00:37:44.709 } 00:37:44.709 ] 00:37:44.709 } 00:37:44.709 ], 00:37:44.709 "tick_rate": 2290000000 00:37:44.709 }' 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:37:44.709 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:37:44.968 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:44.969 rmmod nvme_tcp 00:37:44.969 rmmod nvme_fabrics 00:37:44.969 rmmod nvme_keyring 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 82731 ']' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 82731 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 82731 ']' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 82731 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82731 00:37:44.969 killing process with pid 82731 00:37:44.969 14:51:04 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82731' 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 82731 00:37:44.969 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 82731 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:45.228 00:37:45.228 real 0m19.107s 00:37:45.228 user 1m12.550s 00:37:45.228 sys 0m1.971s 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:45.228 14:51:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:37:45.228 ************************************ 00:37:45.228 END TEST nvmf_rpc 00:37:45.228 ************************************ 00:37:45.228 14:51:04 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:37:45.228 14:51:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:45.228 14:51:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:45.228 14:51:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:45.228 ************************************ 00:37:45.228 START TEST nvmf_invalid 00:37:45.228 ************************************ 00:37:45.228 14:51:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:37:45.487 * Looking for test storage... 
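With the loops finished, the run tears itself down: the accumulated stats are checked (7 admin qpairs and 70 I/O qpairs, both > 0), the trap is cleared, and nvmftestfini kills the target and unloads the host modules before the next test, nvmf_invalid, starts probing its storage. Roughly, and assuming root plus the app PID recorded at startup (82731 in this run), that cleanup amounts to the following sketch:

#!/usr/bin/env bash
# Sketch of the nvmftestfini-style cleanup shown in the log (PID and interface name from this run).
pid=82731

kill "$pid"
while kill -0 "$pid" 2>/dev/null; do sleep 1; done   # wait for the target to exit

# Unload host-side NVMe-oF modules; the log shows these pulling out nvme_tcp, nvme_fabrics and nvme_keyring.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

ip -4 addr flush nvmf_init_if   # nvmf_init_if is the test interface name used by this environment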
00:37:45.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.488 
14:51:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:45.488 14:51:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.488 14:51:05 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:45.488 Cannot find device "nvmf_tgt_br" 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:45.488 Cannot find device "nvmf_tgt_br2" 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:45.488 Cannot find device "nvmf_tgt_br" 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:45.488 Cannot find device "nvmf_tgt_br2" 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:37:45.488 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:45.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:45.747 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:45.747 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:45.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:45.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:37:45.748 00:37:45.748 --- 10.0.0.2 ping statistics --- 00:37:45.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.748 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:45.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:37:45.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:37:45.748 00:37:45.748 --- 10.0.0.3 ping statistics --- 00:37:45.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.748 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:45.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:45.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:37:45.748 00:37:45.748 --- 10.0.0.1 ping statistics --- 00:37:45.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.748 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=83237 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 83237 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 83237 ']' 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:45.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:45.748 14:51:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:37:46.007 [2024-07-22 14:51:05.422922] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:46.007 [2024-07-22 14:51:05.422979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:46.007 [2024-07-22 14:51:05.560816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:46.007 [2024-07-22 14:51:05.608528] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:46.007 [2024-07-22 14:51:05.608574] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:46.007 [2024-07-22 14:51:05.608580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:46.007 [2024-07-22 14:51:05.608584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:46.007 [2024-07-22 14:51:05.608588] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:46.007 [2024-07-22 14:51:05.608810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:46.007 [2024-07-22 14:51:05.609016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:46.007 [2024-07-22 14:51:05.609199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.007 [2024-07-22 14:51:05.609204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21489 00:37:46.945 [2024-07-22 14:51:06.498255] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/22 14:51:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21489 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:37:46.945 request: 00:37:46.945 { 00:37:46.945 "method": "nvmf_create_subsystem", 00:37:46.945 "params": { 00:37:46.945 "nqn": "nqn.2016-06.io.spdk:cnode21489", 00:37:46.945 "tgt_name": "foobar" 00:37:46.945 } 00:37:46.945 } 00:37:46.945 Got JSON-RPC error response 00:37:46.945 GoRPCClient: error on JSON-RPC call' 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/22 14:51:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode21489 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:37:46.945 request: 00:37:46.945 { 
00:37:46.945 "method": "nvmf_create_subsystem", 00:37:46.945 "params": { 00:37:46.945 "nqn": "nqn.2016-06.io.spdk:cnode21489", 00:37:46.945 "tgt_name": "foobar" 00:37:46.945 } 00:37:46.945 } 00:37:46.945 Got JSON-RPC error response 00:37:46.945 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:37:46.945 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2997 00:37:47.204 [2024-07-22 14:51:06.710074] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2997: invalid serial number 'SPDKISFASTANDAWESOME' 00:37:47.204 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/22 14:51:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2997 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:37:47.204 request: 00:37:47.204 { 00:37:47.204 "method": "nvmf_create_subsystem", 00:37:47.204 "params": { 00:37:47.204 "nqn": "nqn.2016-06.io.spdk:cnode2997", 00:37:47.204 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:37:47.204 } 00:37:47.204 } 00:37:47.204 Got JSON-RPC error response 00:37:47.204 GoRPCClient: error on JSON-RPC call' 00:37:47.205 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/22 14:51:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2997 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:37:47.205 request: 00:37:47.205 { 00:37:47.205 "method": "nvmf_create_subsystem", 00:37:47.205 "params": { 00:37:47.205 "nqn": "nqn.2016-06.io.spdk:cnode2997", 00:37:47.205 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:37:47.205 } 00:37:47.205 } 00:37:47.205 Got JSON-RPC error response 00:37:47.205 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:37:47.205 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:37:47.205 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30962 00:37:47.464 [2024-07-22 14:51:06.973839] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30962: invalid model number 'SPDK_Controller' 00:37:47.464 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/22 14:51:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30962], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:37:47.464 request: 00:37:47.464 { 00:37:47.464 "method": "nvmf_create_subsystem", 00:37:47.464 "params": { 00:37:47.464 "nqn": "nqn.2016-06.io.spdk:cnode30962", 00:37:47.464 "model_number": "SPDK_Controller\u001f" 00:37:47.464 } 00:37:47.464 } 00:37:47.464 Got JSON-RPC error response 00:37:47.464 GoRPCClient: error on JSON-RPC call' 00:37:47.464 14:51:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/22 14:51:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode30962], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:37:47.464 request: 00:37:47.464 { 00:37:47.464 "method": "nvmf_create_subsystem", 00:37:47.464 "params": { 00:37:47.464 "nqn": "nqn.2016-06.io.spdk:cnode30962", 00:37:47.464 "model_number": "SPDK_Controller\u001f" 00:37:47.464 } 00:37:47.464 } 00:37:47.464 Got JSON-RPC error response 00:37:47.464 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.464 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:37:47.730 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '.+=s1Z.hrR}SO{a4_gek&B=^MS"U5eW*\R '\''7^%-&AIgp(F$&' 00:37:48.279 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'N'\''>{a4_gek&B=^MS"U5eW*\R '\''7^%-&AIgp(F$&' nqn.2016-06.io.spdk:cnode26549 00:37:48.279 [2024-07-22 14:51:07.889059] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26549: invalid model number 'N'>{a4_gek&B=^MS"U5eW*\R '7^%-&AIgp(F$&' 00:37:48.538 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/22 14:51:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:N'\''>{a4_gek&B=^MS"U5eW*\R '\''7^%-&AIgp(F$& nqn:nqn.2016-06.io.spdk:cnode26549], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN N'\''>{a4_gek&B=^MS"U5eW*\R '\''7^%-&AIgp(F$& 00:37:48.538 request: 00:37:48.538 { 00:37:48.538 "method": "nvmf_create_subsystem", 00:37:48.538 "params": { 00:37:48.538 "nqn": "nqn.2016-06.io.spdk:cnode26549", 00:37:48.538 "model_number": "N'\''>{a4_\u007fgek&B=^MS\"U5eW*\\\u007fR '\''7^%-&AIgp(F$&" 00:37:48.538 } 00:37:48.538 } 00:37:48.538 Got JSON-RPC error response 00:37:48.538 GoRPCClient: error on JSON-RPC call' 00:37:48.538 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/22 14:51:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:N'>{a4_gek&B=^MS"U5eW*\R '7^%-&AIgp(F$& nqn:nqn.2016-06.io.spdk:cnode26549], err: error received for nvmf_create_subsystem method, err: Code=-32602 
Msg=Invalid MN N'>{a4_gek&B=^MS"U5eW*\R '7^%-&AIgp(F$& 00:37:48.538 request: 00:37:48.538 { 00:37:48.538 "method": "nvmf_create_subsystem", 00:37:48.538 "params": { 00:37:48.538 "nqn": "nqn.2016-06.io.spdk:cnode26549", 00:37:48.538 "model_number": "N'>{a4_\u007fgek&B=^MS\"U5eW*\\\u007fR '7^%-&AIgp(F$&" 00:37:48.538 } 00:37:48.538 } 00:37:48.538 Got JSON-RPC error response 00:37:48.538 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:37:48.538 14:51:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:37:48.538 [2024-07-22 14:51:08.080950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.538 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:37:48.798 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:37:48.798 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:37:48.798 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:37:48.798 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:37:48.798 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:37:49.058 [2024-07-22 14:51:08.482016] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:37:49.058 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/22 14:51:08 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:37:49.058 request: 00:37:49.058 { 00:37:49.058 "method": "nvmf_subsystem_remove_listener", 00:37:49.058 "params": { 00:37:49.058 "nqn": "nqn.2016-06.io.spdk:cnode", 00:37:49.058 "listen_address": { 00:37:49.058 "trtype": "tcp", 00:37:49.058 "traddr": "", 00:37:49.058 "trsvcid": "4421" 00:37:49.058 } 00:37:49.058 } 00:37:49.058 } 00:37:49.058 Got JSON-RPC error response 00:37:49.058 GoRPCClient: error on JSON-RPC call' 00:37:49.058 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/22 14:51:08 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:37:49.058 request: 00:37:49.058 { 00:37:49.058 "method": "nvmf_subsystem_remove_listener", 00:37:49.058 "params": { 00:37:49.058 "nqn": "nqn.2016-06.io.spdk:cnode", 00:37:49.058 "listen_address": { 00:37:49.058 "trtype": "tcp", 00:37:49.058 "traddr": "", 00:37:49.058 "trsvcid": "4421" 00:37:49.058 } 00:37:49.058 } 00:37:49.058 } 00:37:49.058 Got JSON-RPC error response 00:37:49.058 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:37:49.058 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14292 -i 0 00:37:49.058 [2024-07-22 14:51:08.685806] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14292: invalid cntlid range [0-65519] 00:37:49.318 14:51:08 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/22 14:51:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14292], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:37:49.318 request: 00:37:49.318 { 00:37:49.318 "method": "nvmf_create_subsystem", 00:37:49.318 "params": { 00:37:49.318 "nqn": "nqn.2016-06.io.spdk:cnode14292", 00:37:49.318 "min_cntlid": 0 00:37:49.318 } 00:37:49.318 } 00:37:49.318 Got JSON-RPC error response 00:37:49.318 GoRPCClient: error on JSON-RPC call' 00:37:49.318 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/22 14:51:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode14292], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:37:49.318 request: 00:37:49.318 { 00:37:49.318 "method": "nvmf_create_subsystem", 00:37:49.318 "params": { 00:37:49.318 "nqn": "nqn.2016-06.io.spdk:cnode14292", 00:37:49.318 "min_cntlid": 0 00:37:49.318 } 00:37:49.318 } 00:37:49.318 Got JSON-RPC error response 00:37:49.318 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:49.318 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10611 -i 65520 00:37:49.318 [2024-07-22 14:51:08.889594] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10611: invalid cntlid range [65520-65519] 00:37:49.318 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/22 14:51:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10611], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:37:49.318 request: 00:37:49.318 { 00:37:49.318 "method": "nvmf_create_subsystem", 00:37:49.318 "params": { 00:37:49.318 "nqn": "nqn.2016-06.io.spdk:cnode10611", 00:37:49.319 "min_cntlid": 65520 00:37:49.319 } 00:37:49.319 } 00:37:49.319 Got JSON-RPC error response 00:37:49.319 GoRPCClient: error on JSON-RPC call' 00:37:49.319 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/22 14:51:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode10611], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:37:49.319 request: 00:37:49.319 { 00:37:49.319 "method": "nvmf_create_subsystem", 00:37:49.319 "params": { 00:37:49.319 "nqn": "nqn.2016-06.io.spdk:cnode10611", 00:37:49.319 "min_cntlid": 65520 00:37:49.319 } 00:37:49.319 } 00:37:49.319 Got JSON-RPC error response 00:37:49.319 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:49.319 14:51:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16315 -I 0 00:37:49.579 [2024-07-22 14:51:09.097464] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16315: invalid cntlid range [1-0] 00:37:49.579 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/22 14:51:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16315], err: error 
received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:37:49.579 request: 00:37:49.579 { 00:37:49.579 "method": "nvmf_create_subsystem", 00:37:49.579 "params": { 00:37:49.579 "nqn": "nqn.2016-06.io.spdk:cnode16315", 00:37:49.579 "max_cntlid": 0 00:37:49.579 } 00:37:49.579 } 00:37:49.579 Got JSON-RPC error response 00:37:49.579 GoRPCClient: error on JSON-RPC call' 00:37:49.579 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/22 14:51:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16315], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:37:49.579 request: 00:37:49.579 { 00:37:49.579 "method": "nvmf_create_subsystem", 00:37:49.579 "params": { 00:37:49.579 "nqn": "nqn.2016-06.io.spdk:cnode16315", 00:37:49.579 "max_cntlid": 0 00:37:49.579 } 00:37:49.579 } 00:37:49.579 Got JSON-RPC error response 00:37:49.579 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:49.579 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1135 -I 65520 00:37:49.839 [2024-07-22 14:51:09.293256] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1135: invalid cntlid range [1-65520] 00:37:49.839 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/22 14:51:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode1135], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:37:49.839 request: 00:37:49.839 { 00:37:49.839 "method": "nvmf_create_subsystem", 00:37:49.839 "params": { 00:37:49.839 "nqn": "nqn.2016-06.io.spdk:cnode1135", 00:37:49.839 "max_cntlid": 65520 00:37:49.839 } 00:37:49.839 } 00:37:49.839 Got JSON-RPC error response 00:37:49.839 GoRPCClient: error on JSON-RPC call' 00:37:49.839 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/22 14:51:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode1135], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:37:49.839 request: 00:37:49.839 { 00:37:49.839 "method": "nvmf_create_subsystem", 00:37:49.839 "params": { 00:37:49.839 "nqn": "nqn.2016-06.io.spdk:cnode1135", 00:37:49.839 "max_cntlid": 65520 00:37:49.839 } 00:37:49.839 } 00:37:49.839 Got JSON-RPC error response 00:37:49.839 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:49.839 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9967 -i 6 -I 5 00:37:50.098 [2024-07-22 14:51:09.485058] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9967: invalid cntlid range [6-5] 00:37:50.098 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/22 14:51:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode9967], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:37:50.098 request: 00:37:50.098 { 00:37:50.098 "method": "nvmf_create_subsystem", 00:37:50.098 "params": { 00:37:50.098 "nqn": 
"nqn.2016-06.io.spdk:cnode9967", 00:37:50.098 "min_cntlid": 6, 00:37:50.098 "max_cntlid": 5 00:37:50.098 } 00:37:50.098 } 00:37:50.098 Got JSON-RPC error response 00:37:50.099 GoRPCClient: error on JSON-RPC call' 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/22 14:51:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode9967], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:37:50.099 request: 00:37:50.099 { 00:37:50.099 "method": "nvmf_create_subsystem", 00:37:50.099 "params": { 00:37:50.099 "nqn": "nqn.2016-06.io.spdk:cnode9967", 00:37:50.099 "min_cntlid": 6, 00:37:50.099 "max_cntlid": 5 00:37:50.099 } 00:37:50.099 } 00:37:50.099 Got JSON-RPC error response 00:37:50.099 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:37:50.099 { 00:37:50.099 "name": "foobar", 00:37:50.099 "method": "nvmf_delete_target", 00:37:50.099 "req_id": 1 00:37:50.099 } 00:37:50.099 Got JSON-RPC error response 00:37:50.099 response: 00:37:50.099 { 00:37:50.099 "code": -32602, 00:37:50.099 "message": "The specified target doesn'\''t exist, cannot delete it." 00:37:50.099 }' 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:37:50.099 { 00:37:50.099 "name": "foobar", 00:37:50.099 "method": "nvmf_delete_target", 00:37:50.099 "req_id": 1 00:37:50.099 } 00:37:50.099 Got JSON-RPC error response 00:37:50.099 response: 00:37:50.099 { 00:37:50.099 "code": -32602, 00:37:50.099 "message": "The specified target doesn't exist, cannot delete it." 
00:37:50.099 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:50.099 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:50.099 rmmod nvme_tcp 00:37:50.099 rmmod nvme_fabrics 00:37:50.099 rmmod nvme_keyring 00:37:50.358 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 83237 ']' 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 83237 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 83237 ']' 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 83237 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83237 00:37:50.359 killing process with pid 83237 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83237' 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 83237 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 83237 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:50.359 14:51:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.619 14:51:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:50.619 00:37:50.619 real 0m5.178s 00:37:50.619 user 0m20.016s 00:37:50.619 sys 0m1.432s 00:37:50.619 14:51:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:50.619 14:51:10 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:37:50.619 ************************************ 00:37:50.619 END TEST nvmf_invalid 00:37:50.619 ************************************ 00:37:50.619 14:51:10 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:37:50.619 14:51:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:50.619 14:51:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:50.619 14:51:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:50.619 ************************************ 00:37:50.619 START TEST nvmf_abort 00:37:50.619 ************************************ 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:37:50.619 * Looking for test storage... 00:37:50.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:50.619 14:51:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:50.619 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:50.880 Cannot find device "nvmf_tgt_br" 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:50.880 Cannot find device "nvmf_tgt_br2" 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:50.880 Cannot find device "nvmf_tgt_br" 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:50.880 Cannot find device "nvmf_tgt_br2" 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:37:50.880 14:51:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:50.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:50.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:50.880 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:51.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:51.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:37:51.140 00:37:51.140 --- 10.0.0.2 ping statistics --- 00:37:51.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.140 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:51.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:51.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:37:51.140 00:37:51.140 --- 10.0.0.3 ping statistics --- 00:37:51.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.140 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:51.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:51.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:37:51.140 00:37:51.140 --- 10.0.0.1 ping statistics --- 00:37:51.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:51.140 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=83738 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 83738 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 83738 ']' 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:51.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
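Editor's note: for reference, the test network that nvmf_veth_init assembled above can be rebuilt by hand. The interface, namespace and address names below are taken directly from the log; the earlier "Cannot find device" / "Cannot open network namespace" messages are only the best-effort teardown of leftovers from a previous run. A condensed sketch, not the common.sh code verbatim:

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator is 10.0.0.1, the two target-side interfaces are 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a bridge ties the host-side veth peers together
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open TCP/4420 on the initiator interface and allow intra-bridge forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # sanity check: all three addresses must answer before the target starts
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once all three pings answer, nvmf_tgt is started inside the namespace, as the log does above, and everything on the initiator side reaches it at 10.0.0.2:4420.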
00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:51.140 14:51:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.140 [2024-07-22 14:51:10.604433] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:51.140 [2024-07-22 14:51:10.604498] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:51.140 [2024-07-22 14:51:10.742836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:51.400 [2024-07-22 14:51:10.789281] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:51.400 [2024-07-22 14:51:10.789328] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:51.400 [2024-07-22 14:51:10.789334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:51.400 [2024-07-22 14:51:10.789339] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:51.400 [2024-07-22 14:51:10.789343] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:51.400 [2024-07-22 14:51:10.789442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:51.400 [2024-07-22 14:51:10.789659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.400 [2024-07-22 14:51:10.789657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.998 [2024-07-22 14:51:11.499542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.998 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.999 Malloc0 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.999 Delay0 
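Editor's note: the rpc_cmd calls above are issued against the nvmf_tgt that was just launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE). Outside the harness the same bdev stack can be built with rpc.py directly; the sketch below only mirrors the logged commands: a TCP transport, a 64 MB malloc bdev with 4096-byte blocks, and a delay bdev that adds 1,000,000 microseconds of average and p99 latency to reads and writes so that commands linger long enough to be aborted.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport options reproduced verbatim from the log
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  # 64 MB backing store, 4096-byte block size
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  # -r/-t are avg/p99 read latency, -w/-n avg/p99 write latency, in microseconds
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000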
00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.999 [2024-07-22 14:51:11.592850] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:51.999 14:51:11 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:52.258 [2024-07-22 14:51:11.790027] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:54.785 Initializing NVMe Controllers 00:37:54.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:54.785 controller IO queue size 128 less than required 00:37:54.785 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:54.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:54.785 Initialization complete. Launching workers. 
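Editor's note: with the bdev stack ready, Delay0 is exported over NVMe/TCP and the abort example is run from the initiator side of the veth pair. The sketch below repeats the logged commands with their original NQN, address and flags; -a allows any host, -q 128 keeps 128 commands queued so aborts have something to hit, and -t 1 limits the run to one second on core 0.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # abort workload: queue-depth-128 I/O with aborts issued against it
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128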
00:37:54.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 44498 00:37:54.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 44561, failed to submit 62 00:37:54.785 success 44502, unsuccess 59, failed 0 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:54.785 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:54.786 rmmod nvme_tcp 00:37:54.786 rmmod nvme_fabrics 00:37:54.786 rmmod nvme_keyring 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 83738 ']' 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 83738 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 83738 ']' 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 83738 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83738 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83738' 00:37:54.786 killing process with pid 83738 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 83738 00:37:54.786 14:51:13 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 83738 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:54.786 00:37:54.786 real 0m4.162s 00:37:54.786 user 0m12.125s 00:37:54.786 sys 0m0.936s 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:54.786 14:51:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:54.786 ************************************ 00:37:54.786 END TEST nvmf_abort 00:37:54.786 ************************************ 00:37:54.786 14:51:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:37:54.786 14:51:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:37:54.786 14:51:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:54.786 14:51:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:54.786 ************************************ 00:37:54.786 START TEST nvmf_ns_hotplug_stress 00:37:54.786 ************************************ 00:37:54.786 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:37:54.786 * Looking for test storage... 00:37:55.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:55.067 14:51:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:55.067 Cannot find device "nvmf_tgt_br" 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:55.067 Cannot find device "nvmf_tgt_br2" 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:55.067 Cannot find device "nvmf_tgt_br" 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:55.067 Cannot find device "nvmf_tgt_br2" 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:55.067 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:55.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:55.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:55.068 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link 
set nvmf_tgt_br up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:55.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:55.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:37:55.358 00:37:55.358 --- 10.0.0.2 ping statistics --- 00:37:55.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.358 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:55.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:55.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:37:55.358 00:37:55.358 --- 10.0.0.3 ping statistics --- 00:37:55.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.358 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:37:55.358 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:55.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:55.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:37:55.358 00:37:55.358 --- 10.0.0.1 ping statistics --- 00:37:55.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.359 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=84009 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 84009 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 84009 ']' 00:37:55.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:55.359 14:51:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:55.359 [2024-07-22 14:51:14.877902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:55.359 [2024-07-22 14:51:14.877959] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.618 [2024-07-22 14:51:15.019111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:55.618 [2024-07-22 14:51:15.064277] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:37:55.618 [2024-07-22 14:51:15.064322] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.618 [2024-07-22 14:51:15.064327] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.618 [2024-07-22 14:51:15.064332] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.618 [2024-07-22 14:51:15.064336] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.618 [2024-07-22 14:51:15.064548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:55.618 [2024-07-22 14:51:15.065012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:55.618 [2024-07-22 14:51:15.065012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:37:56.183 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:56.442 [2024-07-22 14:51:15.921850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:56.442 14:51:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:56.700 14:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:56.700 [2024-07-22 14:51:16.302187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.700 14:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:56.957 14:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:57.214 Malloc0 00:37:57.214 14:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:57.472 Delay0 00:37:57.472 14:51:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:57.472 14:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:57.730 NULL1 00:37:57.730 
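Editor's note: the hotplug-stress target mirrors the abort target, except that the subsystem is capped at 10 namespaces (-m 10) and a null bdev is created next to the delayed malloc bdev so that it can be resized and re-attached while I/O is in flight. A sketch of the rpc.py sequence the log traces (sizes are the MB values passed on the command line):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512
  # NULL1 itself is attached as a namespace right after this point in the log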
14:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:57.988 14:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=84137 00:37:57.988 14:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:57.988 14:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:37:57.988 14:51:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:59.381 Read completed with error (sct=0, sc=11) 00:37:59.381 14:51:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:59.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:59.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:59.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:59.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:59.381 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:59.381 14:51:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:59.381 14:51:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:59.640 true 00:37:59.640 14:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:37:59.640 14:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:00.574 14:51:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:00.574 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:38:00.574 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:38:00.833 true 00:38:00.833 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:00.833 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:00.833 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:01.093 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:38:01.093 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:38:01.351 true 00:38:01.351 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:01.351 14:51:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:02.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.291 14:51:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:02.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:02.550 14:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:38:02.550 14:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:38:02.809 true 00:38:02.809 14:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:02.809 14:51:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:03.746 14:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:03.746 14:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:38:03.746 14:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:38:04.004 true 00:38:04.004 14:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:04.004 14:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:04.262 14:51:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:04.520 14:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:38:04.521 14:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:38:04.779 true 00:38:04.779 14:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:04.779 14:51:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:05.724 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.724 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:38:05.724 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:38:05.982 true 00:38:05.982 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 
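Editor's note: every iteration above follows the same pattern, and it keeps repeating (null_size 1001, 1002, and so on) until the 30-second perf run finishes: random reads keep flowing from the initiator while namespace 1 is detached, Delay0 is re-attached and NULL1 is resized upward, and the kill -0 checks verify between steps that perf is still alive. The suppressed "Read completed with error" messages appear to be perf reporting reads that fail while a namespace is out, which this test tolerates. A condensed sketch of the loop the log traces, not the script verbatim:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 "$null_size"
  done
  wait "$PERF_PID"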
00:38:05.982 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:06.240 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:06.499 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:38:06.499 14:51:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:38:06.499 true 00:38:06.499 14:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:06.499 14:51:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.876 14:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:07.876 14:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:38:07.876 14:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:38:08.135 true 00:38:08.135 14:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:08.135 14:51:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.108 14:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:09.108 14:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:38:09.108 14:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:38:09.365 true 00:38:09.365 14:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:09.365 14:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:09.365 14:51:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.624 14:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:38:09.624 14:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:38:09.882 true 00:38:09.882 14:51:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:09.882 14:51:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:10.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:10.820 14:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:10.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:11.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:11.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:11.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:11.080 14:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:38:11.080 14:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:38:11.340 true 00:38:11.340 14:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:11.340 14:51:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.280 14:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.280 14:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:38:12.280 14:51:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:38:12.579 true 00:38:12.579 14:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:12.579 14:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:12.838 14:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:12.838 14:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:38:12.838 14:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:38:13.098 true 00:38:13.098 14:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:13.098 14:51:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:14.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.035 14:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:14.035 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.293 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:38:14.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.293 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:14.293 14:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:38:14.293 14:51:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:38:14.552 true 00:38:14.552 14:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:14.552 14:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:15.495 14:51:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:15.495 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:38:15.495 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:38:15.755 true 00:38:15.755 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:15.755 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:16.015 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:16.015 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:38:16.015 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:38:16.275 true 00:38:16.275 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:16.275 14:51:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:17.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.652 14:51:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:17.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:17.652 14:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:38:17.652 14:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:38:17.652 true 00:38:17.652 14:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:17.652 14:51:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:38:18.588 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:18.847 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:38:18.847 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:38:18.847 true 00:38:18.847 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:18.847 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:19.106 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:19.365 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:38:19.366 14:51:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:38:19.625 true 00:38:19.625 14:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:19.625 14:51:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:20.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.563 14:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:20.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:20.822 14:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:38:20.823 14:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:38:21.082 true 00:38:21.082 14:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:21.082 14:51:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.019 14:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.019 14:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:38:22.019 14:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:38:22.278 true 00:38:22.278 14:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:22.278 14:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:22.278 14:51:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:22.536 14:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:38:22.536 14:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:38:22.795 true 00:38:22.795 14:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:22.795 14:51:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:23.731 14:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:23.990 14:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:38:23.990 14:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:38:24.249 true 00:38:24.249 14:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:24.249 14:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:24.509 14:51:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:24.509 14:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:38:24.509 14:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:38:24.768 true 00:38:24.768 14:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:24.768 14:51:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.147 14:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:26.147 14:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:38:26.147 14:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:38:26.147 true 00:38:26.147 14:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:26.147 14:51:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.103 14:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:38:27.363 14:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:38:27.363 14:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:38:27.363 true 00:38:27.622 14:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:27.622 14:51:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:27.622 14:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:27.881 14:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:38:27.882 14:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:38:28.141 true 00:38:28.141 14:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137 00:38:28.141 14:51:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:29.080 Initializing NVMe Controllers 00:38:29.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:29.080 Controller IO queue size 128, less than required. 00:38:29.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:29.080 Controller IO queue size 128, less than required. 00:38:29.080 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:29.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:29.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:38:29.080 Initialization complete. Launching workers. 
00:38:29.080 ========================================================
00:38:29.080 Latency(us)
00:38:29.080 Device Information : IOPS MiB/s Average min max
00:38:29.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1583.36 0.77 57674.30 2795.08 1076366.31
00:38:29.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17020.84 8.31 7520.46 2697.41 527621.31
00:38:29.080 ========================================================
00:38:29.080 Total : 18604.20 9.08 11788.93 2697.41 1076366.31
00:38:29.080
00:38:29.080 14:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:38:29.339 14:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:38:29.339 14:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:38:29.599 true
00:38:29.599 14:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 84137
00:38:29.599 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (84137) - No such process
00:38:29.599 14:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 84137
00:38:29.599 14:51:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:38:29.599 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:38:29.857 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:38:29.857 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:38:29.857 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:38:29.857 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:29.857 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:38:30.125 null0
00:38:30.125 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:30.125 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:30.125 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:38:30.125 null1
00:38:30.125 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:30.125 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:30.125 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:38:30.406 null2
00:38:30.406 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:38:30.406 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:38:30.406 14:51:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100
4096 00:38:30.665 null3 00:38:30.665 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:30.665 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:30.665 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:38:30.665 null4 00:38:30.665 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:30.665 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:30.665 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:38:30.924 null5 00:38:30.924 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:30.924 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:30.924 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:38:31.182 null6 00:38:31.182 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:31.182 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:31.182 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:38:31.182 null7 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 85165 85166 85168 85171 85173 85174 85175 85178 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.442 14:51:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:31.442 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:31.442 14:51:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:31.442 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:31.442 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.442 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.702 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:31.961 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 
)) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.220 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:32.479 14:51:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:32.479 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.479 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:38:32.479 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.479 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.479 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.738 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.739 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:32.739 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:32.997 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:32.998 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:33.258 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.519 14:51:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:33.519 14:51:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.519 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.519 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:33.519 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.519 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.519 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:33.519 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:33.519 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:38:33.779 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:34.038 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.297 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:34.557 14:51:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:34.557 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:34.816 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:38:35.075 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:38:35.334 14:51:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.334 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:35.335 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:35.594 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.594 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.594 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:38:35.594 14:51:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.594 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:38:35.853 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:36.112 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:38:36.112 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:36.112 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:36.371 rmmod nvme_tcp 00:38:36.371 rmmod nvme_fabrics 00:38:36.371 rmmod nvme_keyring 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 84009 ']' 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 84009 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 84009 ']' 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 84009 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84009 00:38:36.371 killing process with pid 84009 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84009' 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 84009 00:38:36.371 14:51:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 84009 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:36.631 00:38:36.631 real 0m41.817s 00:38:36.631 user 3m15.111s 00:38:36.631 sys 
0m10.671s 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:36.631 14:51:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:38:36.631 ************************************ 00:38:36.631 END TEST nvmf_ns_hotplug_stress 00:38:36.631 ************************************ 00:38:36.631 14:51:56 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:38:36.631 14:51:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:36.631 14:51:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:36.631 14:51:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:36.631 ************************************ 00:38:36.631 START TEST nvmf_connect_stress 00:38:36.631 ************************************ 00:38:36.631 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:38:36.891 * Looking for test storage... 00:38:36.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:36.891 
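The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls in the test that just finished above come from the @16-@18 loop of ns_hotplug_stress.sh, run by several workers in parallel (which is why their entries interleave). A rough single-worker sketch, for orientation only: the rpc.py path, the target NQN, the nullN bdev naming and the ten-iteration bound are taken from the trace, while the namespace-ID selection and the add-versus-remove choice per pass are not visible here and are illustrative.

  # One hot-plug worker, reconstructed from the @16/@17/@18 trace lines above.
  # The namespace-ID choice and error handling are assumptions; failures from
  # adding an already-attached ID (or removing a missing one) are ignored.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do
      n=$(( (RANDOM % 8) + 1 ))                       # nsid 1..8 maps to bdevs null0..null7
      "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true
      "$rpc" nvmf_subsystem_remove_ns "$nqn" $(( (RANDOM % 8) + 1 )) || true
  done
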
14:51:56 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.891 14:51:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@12 -- # nvmftestinit 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:36.892 Cannot find device "nvmf_tgt_br" 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:36.892 Cannot find device "nvmf_tgt_br2" 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:36.892 14:51:56 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:36.892 Cannot find device "nvmf_tgt_br" 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:36.892 Cannot find device "nvmf_tgt_br2" 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:36.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:36.892 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:36.892 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:37.152 14:51:56 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:37.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:37.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:38:37.152 00:38:37.152 --- 10.0.0.2 ping statistics --- 00:38:37.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.152 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:37.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:37.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:38:37.152 00:38:37.152 --- 10.0.0.3 ping statistics --- 00:38:37.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.152 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:37.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:37.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:38:37.152 00:38:37.152 --- 10.0.0.1 ping statistics --- 00:38:37.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:37.152 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=86519 00:38:37.152 14:51:56 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 86519 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 86519 ']' 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:37.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:37.152 14:51:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:37.152 [2024-07-22 14:51:56.727407] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:37.152 [2024-07-22 14:51:56.727469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:37.412 [2024-07-22 14:51:56.867658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:37.412 [2024-07-22 14:51:56.917586] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:37.412 [2024-07-22 14:51:56.917643] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:37.412 [2024-07-22 14:51:56.917649] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:37.412 [2024-07-22 14:51:56.917654] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:37.412 [2024-07-22 14:51:56.917658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:37.412 [2024-07-22 14:51:56.917941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:37.412 [2024-07-22 14:51:56.917875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:37.412 [2024-07-22 14:51:56.917945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:38:37.981 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:37.981 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:38:37.981 14:51:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:37.981 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:37.981 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:38.240 [2024-07-22 14:51:57.635547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:38.240 [2024-07-22 14:51:57.660327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:38.240 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:38.241 NULL1 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=86571 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.241 14:51:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:38.500 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:38.500 14:51:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:38.500 14:51:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:38.500 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:38.500 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:39.070 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.070 14:51:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:39.070 14:51:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:39.070 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.070 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:39.329 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.329 14:51:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:39.329 14:51:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:39.329 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.329 14:51:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:39.587 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.587 14:51:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:39.587 14:51:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:39.587 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.587 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:39.845 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:39.845 14:51:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:39.845 14:51:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:39.845 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:39.845 14:51:59 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:40.412 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.412 14:51:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:40.412 14:51:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:40.412 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.412 14:51:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:40.671 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.671 14:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:40.671 14:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:40.671 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.671 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:40.931 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:40.931 14:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:40.931 14:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:40.931 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:40.931 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:41.191 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:41.191 14:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:41.191 14:52:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:41.191 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:41.191 14:52:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:41.450 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:41.450 14:52:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:41.450 14:52:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:41.450 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:41.450 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:42.019 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.019 14:52:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:42.019 14:52:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:42.019 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.019 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:42.279 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.279 14:52:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:42.279 14:52:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:42.279 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.279 14:52:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
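The connect_stress body traced above (and continuing below) reduces to the following sketch: configure the target over RPC, start the connection churner for ten seconds, and keep replaying a batch of RPCs for as long as the churner stays alive. rpc_cmd is the autotest wrapper around scripts/rpc.py (a simplified stand-in is defined here); the payload written into rpc.txt and the stdin redirect on rpc_cmd are not visible in this trace, so those parts are assumptions.

  # Simplified stand-in for the autotest rpc_cmd helper; assumes rpc.py's
  # batch-over-stdin mode when no subcommand is given.
  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # Start the connection churner (backgrounding assumed; flags verbatim from the trace).
  /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
  rm -f "$rpcs"
  for i in $(seq 1 20); do
      # The trace only shows a bare `cat` at line 28; the real payload is not
      # visible here, so a harmless placeholder RPC is appended instead.
      echo "rpc_get_methods" >> "$rpcs"
  done

  # Hammer the target with the batched RPCs while the churner is still alive,
  # then reap it and clean up (as the @38/@39 trace lines further down show).
  while kill -0 "$PERF_PID" 2> /dev/null; do
      rpc_cmd < "$rpcs"
  done
  wait "$PERF_PID"
  rm -f "$rpcs"

The repeated kill -0 86571 / rpc_cmd pairs that follow in the log are simply successive iterations of that final while loop.
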
00:38:42.537 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.537 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:42.537 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:42.537 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.537 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:42.796 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.796 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:42.796 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:42.796 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.796 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:43.055 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:43.055 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:43.055 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:43.055 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:43.055 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:43.623 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:43.623 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:43.623 14:52:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:43.623 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:43.623 14:52:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:43.883 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:43.883 14:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:43.883 14:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:43.883 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:43.883 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:44.142 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.142 14:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:44.142 14:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:44.142 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.142 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:44.402 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.402 14:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:44.402 14:52:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:44.402 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.402 14:52:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:44.661 14:52:04 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:44.661 14:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:44.661 14:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:44.661 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:44.661 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:45.229 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:45.229 14:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:45.229 14:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:45.229 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.229 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:45.489 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:45.489 14:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:45.489 14:52:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:45.489 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.489 14:52:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:45.749 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:45.749 14:52:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:45.749 14:52:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:45.749 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:45.749 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:46.009 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.009 14:52:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:46.009 14:52:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:46.009 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.009 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:46.578 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.578 14:52:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:46.578 14:52:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:46.578 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.578 14:52:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:46.838 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:46.838 14:52:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:46.838 14:52:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:46.838 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:46.838 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:47.098 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.098 14:52:06 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:47.098 14:52:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:47.098 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.098 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:47.393 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.393 14:52:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:47.393 14:52:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:47.393 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.393 14:52:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:47.652 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.652 14:52:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:47.652 14:52:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:47.652 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.652 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:47.911 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:47.911 14:52:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:47.911 14:52:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:47.911 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:47.911 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:48.480 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.480 14:52:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:48.480 14:52:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:38:48.480 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:48.480 14:52:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:48.480 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 86571 00:38:48.739 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (86571) - No such process 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 86571 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:48.739 14:52:08 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:48.739 rmmod nvme_tcp 00:38:48.739 rmmod nvme_fabrics 00:38:48.739 rmmod nvme_keyring 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 86519 ']' 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 86519 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 86519 ']' 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 86519 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86519 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:38:48.739 killing process with pid 86519 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86519' 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 86519 00:38:48.739 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 86519 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:48.998 00:38:48.998 real 0m12.359s 00:38:48.998 user 0m41.983s 00:38:48.998 sys 0m2.683s 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:48.998 14:52:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:38:48.998 ************************************ 00:38:48.998 END TEST nvmf_connect_stress 00:38:48.998 ************************************ 00:38:48.998 14:52:08 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:38:48.998 14:52:08 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:48.998 14:52:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:48.998 14:52:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:48.998 ************************************ 00:38:48.998 START TEST nvmf_fused_ordering 00:38:48.998 ************************************ 00:38:48.998 14:52:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:38:49.258 * Looking for test storage... 00:38:49.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:49.258 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:49.259 Cannot find device "nvmf_tgt_br" 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:49.259 Cannot find device "nvmf_tgt_br2" 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:49.259 Cannot find device "nvmf_tgt_br" 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # 
true 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:49.259 Cannot find device "nvmf_tgt_br2" 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:38:49.259 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:49.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:49.519 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:49.519 14:52:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:49.519 14:52:09 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:49.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:49.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:38:49.519 00:38:49.519 --- 10.0.0.2 ping statistics --- 00:38:49.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.519 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:49.519 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:49.519 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:38:49.519 00:38:49.519 --- 10.0.0.3 ping statistics --- 00:38:49.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.519 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:49.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:49.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 00:38:49.519 00:38:49.519 --- 10.0.0.1 ping statistics --- 00:38:49.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:49.519 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:38:49.519 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=86902 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 86902 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 86902 ']' 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:49.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:49.520 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:49.520 [2024-07-22 14:52:09.129709] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:49.520 [2024-07-22 14:52:09.129789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:49.779 [2024-07-22 14:52:09.269512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.779 [2024-07-22 14:52:09.316337] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:49.779 [2024-07-22 14:52:09.316397] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:49.779 [2024-07-22 14:52:09.316405] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:49.779 [2024-07-22 14:52:09.316410] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:49.779 [2024-07-22 14:52:09.316415] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
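(For orientation, the nvmf_veth_init sequence traced above is what gives the target its 10.0.0.2 listener address inside the nvmf_tgt_ns_spdk namespace. Condensed into a standalone sketch — interface, bridge, and namespace names are copied verbatim from the trace; this is an illustrative recap under those assumptions, not the canonical nvmf/common.sh implementation, and the second target interface/address pair is omitted for brevity:

# Recreate the test topology: host-side veth into a bridge, target-side veth inside a netns.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity check: the host should now reach the target namespace, as the ping output above confirms
)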
00:38:49.779 [2024-07-22 14:52:09.316434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.348 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:50.348 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:38:50.348 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:50.348 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:50.348 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:50.606 14:52:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:50.606 14:52:09 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:50.606 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:50.606 14:52:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:50.606 [2024-07-22 14:52:10.008652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:50.606 [2024-07-22 14:52:10.032705] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:50.606 NULL1 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:50.606 14:52:10 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:50.606 14:52:10 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:50.607 [2024-07-22 14:52:10.102287] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:50.607 [2024-07-22 14:52:10.102327] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86956 ] 00:38:50.874 Attached to nqn.2016-06.io.spdk:cnode1 00:38:50.874 Namespace ID: 1 size: 1GB 00:38:50.874 fused_ordering(0) 00:38:50.874 fused_ordering(1) 00:38:50.874 fused_ordering(2) 00:38:50.874 fused_ordering(3) 00:38:50.874 fused_ordering(4) 00:38:50.874 fused_ordering(5) 00:38:50.874 fused_ordering(6) 00:38:50.874 fused_ordering(7) 00:38:50.874 fused_ordering(8) 00:38:50.874 fused_ordering(9) 00:38:50.874 fused_ordering(10) 00:38:50.874 fused_ordering(11) 00:38:50.874 fused_ordering(12) 00:38:50.874 fused_ordering(13) 00:38:50.874 fused_ordering(14) 00:38:50.874 fused_ordering(15) 00:38:50.874 fused_ordering(16) 00:38:50.874 fused_ordering(17) 00:38:50.874 fused_ordering(18) 00:38:50.874 fused_ordering(19) 00:38:50.874 fused_ordering(20) 00:38:50.874 fused_ordering(21) 00:38:50.874 fused_ordering(22) 00:38:50.874 fused_ordering(23) 00:38:50.874 fused_ordering(24) 00:38:50.874 fused_ordering(25) 00:38:50.874 fused_ordering(26) 00:38:50.874 fused_ordering(27) 00:38:50.874 fused_ordering(28) 00:38:50.874 fused_ordering(29) 00:38:50.874 fused_ordering(30) 00:38:50.874 fused_ordering(31) 00:38:50.874 fused_ordering(32) 00:38:50.874 fused_ordering(33) 00:38:50.874 fused_ordering(34) 00:38:50.874 fused_ordering(35) 00:38:50.874 fused_ordering(36) 00:38:50.874 fused_ordering(37) 00:38:50.874 fused_ordering(38) 00:38:50.874 fused_ordering(39) 00:38:50.874 fused_ordering(40) 00:38:50.874 fused_ordering(41) 00:38:50.874 fused_ordering(42) 00:38:50.874 fused_ordering(43) 00:38:50.874 fused_ordering(44) 00:38:50.874 fused_ordering(45) 00:38:50.874 fused_ordering(46) 00:38:50.874 fused_ordering(47) 00:38:50.874 fused_ordering(48) 00:38:50.874 fused_ordering(49) 00:38:50.874 fused_ordering(50) 00:38:50.874 fused_ordering(51) 00:38:50.874 fused_ordering(52) 00:38:50.874 fused_ordering(53) 00:38:50.874 fused_ordering(54) 00:38:50.874 fused_ordering(55) 00:38:50.874 fused_ordering(56) 00:38:50.874 fused_ordering(57) 00:38:50.874 fused_ordering(58) 00:38:50.874 fused_ordering(59) 00:38:50.874 fused_ordering(60) 00:38:50.874 fused_ordering(61) 00:38:50.874 fused_ordering(62) 00:38:50.874 fused_ordering(63) 00:38:50.874 fused_ordering(64) 00:38:50.874 fused_ordering(65) 00:38:50.874 fused_ordering(66) 00:38:50.874 fused_ordering(67) 00:38:50.874 fused_ordering(68) 00:38:50.874 fused_ordering(69) 00:38:50.874 fused_ordering(70) 00:38:50.874 fused_ordering(71) 00:38:50.874 fused_ordering(72) 00:38:50.874 fused_ordering(73) 00:38:50.874 fused_ordering(74) 00:38:50.874 fused_ordering(75) 00:38:50.874 fused_ordering(76) 00:38:50.875 fused_ordering(77) 00:38:50.875 fused_ordering(78) 00:38:50.875 fused_ordering(79) 00:38:50.875 fused_ordering(80) 00:38:50.875 fused_ordering(81) 00:38:50.875 
fused_ordering(82) 00:38:50.875 fused_ordering(83) 00:38:50.875 fused_ordering(84) 00:38:50.875 fused_ordering(85) 00:38:50.875 fused_ordering(86) 00:38:50.875 fused_ordering(87) 00:38:50.875 fused_ordering(88) 00:38:50.875 fused_ordering(89) 00:38:50.875 fused_ordering(90) 00:38:50.875 fused_ordering(91) 00:38:50.875 fused_ordering(92) 00:38:50.875 fused_ordering(93) 00:38:50.875 fused_ordering(94) 00:38:50.875 fused_ordering(95) 00:38:50.875 fused_ordering(96) 00:38:50.875 fused_ordering(97) 00:38:50.875 fused_ordering(98) 00:38:50.875 fused_ordering(99) 00:38:50.875 fused_ordering(100) 00:38:50.875 fused_ordering(101) 00:38:50.875 fused_ordering(102) 00:38:50.875 fused_ordering(103) 00:38:50.875 fused_ordering(104) 00:38:50.875 fused_ordering(105) 00:38:50.875 fused_ordering(106) 00:38:50.875 fused_ordering(107) 00:38:50.875 fused_ordering(108) 00:38:50.875 fused_ordering(109) 00:38:50.875 fused_ordering(110) 00:38:50.875 fused_ordering(111) 00:38:50.875 fused_ordering(112) 00:38:50.875 fused_ordering(113) 00:38:50.875 fused_ordering(114) 00:38:50.875 fused_ordering(115) 00:38:50.875 fused_ordering(116) 00:38:50.875 fused_ordering(117) 00:38:50.875 fused_ordering(118) 00:38:50.875 fused_ordering(119) 00:38:50.875 fused_ordering(120) 00:38:50.875 fused_ordering(121) 00:38:50.875 fused_ordering(122) 00:38:50.875 fused_ordering(123) 00:38:50.875 fused_ordering(124) 00:38:50.875 fused_ordering(125) 00:38:50.875 fused_ordering(126) 00:38:50.875 fused_ordering(127) 00:38:50.875 fused_ordering(128) 00:38:50.875 fused_ordering(129) 00:38:50.875 fused_ordering(130) 00:38:50.875 fused_ordering(131) 00:38:50.875 fused_ordering(132) 00:38:50.875 fused_ordering(133) 00:38:50.875 fused_ordering(134) 00:38:50.875 fused_ordering(135) 00:38:50.875 fused_ordering(136) 00:38:50.875 fused_ordering(137) 00:38:50.875 fused_ordering(138) 00:38:50.875 fused_ordering(139) 00:38:50.875 fused_ordering(140) 00:38:50.875 fused_ordering(141) 00:38:50.875 fused_ordering(142) 00:38:50.875 fused_ordering(143) 00:38:50.875 fused_ordering(144) 00:38:50.875 fused_ordering(145) 00:38:50.875 fused_ordering(146) 00:38:50.875 fused_ordering(147) 00:38:50.875 fused_ordering(148) 00:38:50.875 fused_ordering(149) 00:38:50.875 fused_ordering(150) 00:38:50.875 fused_ordering(151) 00:38:50.875 fused_ordering(152) 00:38:50.875 fused_ordering(153) 00:38:50.875 fused_ordering(154) 00:38:50.875 fused_ordering(155) 00:38:50.875 fused_ordering(156) 00:38:50.875 fused_ordering(157) 00:38:50.875 fused_ordering(158) 00:38:50.875 fused_ordering(159) 00:38:50.875 fused_ordering(160) 00:38:50.875 fused_ordering(161) 00:38:50.875 fused_ordering(162) 00:38:50.875 fused_ordering(163) 00:38:50.875 fused_ordering(164) 00:38:50.875 fused_ordering(165) 00:38:50.875 fused_ordering(166) 00:38:50.875 fused_ordering(167) 00:38:50.875 fused_ordering(168) 00:38:50.875 fused_ordering(169) 00:38:50.875 fused_ordering(170) 00:38:50.875 fused_ordering(171) 00:38:50.875 fused_ordering(172) 00:38:50.875 fused_ordering(173) 00:38:50.875 fused_ordering(174) 00:38:50.875 fused_ordering(175) 00:38:50.875 fused_ordering(176) 00:38:50.875 fused_ordering(177) 00:38:50.875 fused_ordering(178) 00:38:50.875 fused_ordering(179) 00:38:50.875 fused_ordering(180) 00:38:50.875 fused_ordering(181) 00:38:50.875 fused_ordering(182) 00:38:50.875 fused_ordering(183) 00:38:50.875 fused_ordering(184) 00:38:50.875 fused_ordering(185) 00:38:50.875 fused_ordering(186) 00:38:50.875 fused_ordering(187) 00:38:50.875 fused_ordering(188) 00:38:50.875 fused_ordering(189) 00:38:50.875 
fused_ordering(190) 00:38:50.875 fused_ordering(191) 00:38:50.875 fused_ordering(192) 00:38:50.875 fused_ordering(193) 00:38:50.875 fused_ordering(194) 00:38:50.875 fused_ordering(195) 00:38:50.875 fused_ordering(196) 00:38:50.875 fused_ordering(197) 00:38:50.875 fused_ordering(198) 00:38:50.875 fused_ordering(199) 00:38:50.875 fused_ordering(200) 00:38:50.875 fused_ordering(201) 00:38:50.875 fused_ordering(202) 00:38:50.875 fused_ordering(203) 00:38:50.875 fused_ordering(204) 00:38:50.875 fused_ordering(205) 00:38:51.135 fused_ordering(206) 00:38:51.135 fused_ordering(207) 00:38:51.135 fused_ordering(208) 00:38:51.135 fused_ordering(209) 00:38:51.135 fused_ordering(210) 00:38:51.135 fused_ordering(211) 00:38:51.135 fused_ordering(212) 00:38:51.135 fused_ordering(213) 00:38:51.135 fused_ordering(214) 00:38:51.135 fused_ordering(215) 00:38:51.135 fused_ordering(216) 00:38:51.135 fused_ordering(217) 00:38:51.135 fused_ordering(218) 00:38:51.135 fused_ordering(219) 00:38:51.135 fused_ordering(220) 00:38:51.135 fused_ordering(221) 00:38:51.135 fused_ordering(222) 00:38:51.135 fused_ordering(223) 00:38:51.135 fused_ordering(224) 00:38:51.135 fused_ordering(225) 00:38:51.135 fused_ordering(226) 00:38:51.135 fused_ordering(227) 00:38:51.135 fused_ordering(228) 00:38:51.135 fused_ordering(229) 00:38:51.135 fused_ordering(230) 00:38:51.135 fused_ordering(231) 00:38:51.135 fused_ordering(232) 00:38:51.135 fused_ordering(233) 00:38:51.135 fused_ordering(234) 00:38:51.135 fused_ordering(235) 00:38:51.135 fused_ordering(236) 00:38:51.135 fused_ordering(237) 00:38:51.135 fused_ordering(238) 00:38:51.135 fused_ordering(239) 00:38:51.135 fused_ordering(240) 00:38:51.135 fused_ordering(241) 00:38:51.135 fused_ordering(242) 00:38:51.135 fused_ordering(243) 00:38:51.135 fused_ordering(244) 00:38:51.135 fused_ordering(245) 00:38:51.135 fused_ordering(246) 00:38:51.135 fused_ordering(247) 00:38:51.135 fused_ordering(248) 00:38:51.135 fused_ordering(249) 00:38:51.135 fused_ordering(250) 00:38:51.135 fused_ordering(251) 00:38:51.135 fused_ordering(252) 00:38:51.135 fused_ordering(253) 00:38:51.135 fused_ordering(254) 00:38:51.135 fused_ordering(255) 00:38:51.135 fused_ordering(256) 00:38:51.135 fused_ordering(257) 00:38:51.135 fused_ordering(258) 00:38:51.135 fused_ordering(259) 00:38:51.135 fused_ordering(260) 00:38:51.135 fused_ordering(261) 00:38:51.135 fused_ordering(262) 00:38:51.135 fused_ordering(263) 00:38:51.135 fused_ordering(264) 00:38:51.135 fused_ordering(265) 00:38:51.135 fused_ordering(266) 00:38:51.135 fused_ordering(267) 00:38:51.135 fused_ordering(268) 00:38:51.135 fused_ordering(269) 00:38:51.135 fused_ordering(270) 00:38:51.135 fused_ordering(271) 00:38:51.135 fused_ordering(272) 00:38:51.135 fused_ordering(273) 00:38:51.135 fused_ordering(274) 00:38:51.135 fused_ordering(275) 00:38:51.135 fused_ordering(276) 00:38:51.135 fused_ordering(277) 00:38:51.135 fused_ordering(278) 00:38:51.135 fused_ordering(279) 00:38:51.135 fused_ordering(280) 00:38:51.135 fused_ordering(281) 00:38:51.135 fused_ordering(282) 00:38:51.135 fused_ordering(283) 00:38:51.135 fused_ordering(284) 00:38:51.135 fused_ordering(285) 00:38:51.135 fused_ordering(286) 00:38:51.135 fused_ordering(287) 00:38:51.135 fused_ordering(288) 00:38:51.135 fused_ordering(289) 00:38:51.135 fused_ordering(290) 00:38:51.135 fused_ordering(291) 00:38:51.135 fused_ordering(292) 00:38:51.135 fused_ordering(293) 00:38:51.135 fused_ordering(294) 00:38:51.135 fused_ordering(295) 00:38:51.135 fused_ordering(296) 00:38:51.135 fused_ordering(297) 
00:38:51.135 fused_ordering(298) 00:38:51.135 fused_ordering(299) 00:38:51.135 fused_ordering(300) 00:38:51.135 fused_ordering(301) 00:38:51.135 fused_ordering(302) 00:38:51.135 fused_ordering(303) 00:38:51.135 fused_ordering(304) 00:38:51.135 fused_ordering(305) 00:38:51.135 fused_ordering(306) 00:38:51.135 fused_ordering(307) 00:38:51.135 fused_ordering(308) 00:38:51.135 fused_ordering(309) 00:38:51.135 fused_ordering(310) 00:38:51.135 fused_ordering(311) 00:38:51.135 fused_ordering(312) 00:38:51.135 fused_ordering(313) 00:38:51.135 fused_ordering(314) 00:38:51.135 fused_ordering(315) 00:38:51.135 fused_ordering(316) 00:38:51.135 fused_ordering(317) 00:38:51.135 fused_ordering(318) 00:38:51.135 fused_ordering(319) 00:38:51.135 fused_ordering(320) 00:38:51.135 fused_ordering(321) 00:38:51.135 fused_ordering(322) 00:38:51.135 fused_ordering(323) 00:38:51.135 fused_ordering(324) 00:38:51.135 fused_ordering(325) 00:38:51.135 fused_ordering(326) 00:38:51.135 fused_ordering(327) 00:38:51.135 fused_ordering(328) 00:38:51.135 fused_ordering(329) 00:38:51.135 fused_ordering(330) 00:38:51.135 fused_ordering(331) 00:38:51.135 fused_ordering(332) 00:38:51.135 fused_ordering(333) 00:38:51.135 fused_ordering(334) 00:38:51.135 fused_ordering(335) 00:38:51.135 fused_ordering(336) 00:38:51.135 fused_ordering(337) 00:38:51.135 fused_ordering(338) 00:38:51.135 fused_ordering(339) 00:38:51.135 fused_ordering(340) 00:38:51.135 fused_ordering(341) 00:38:51.135 fused_ordering(342) 00:38:51.135 fused_ordering(343) 00:38:51.135 fused_ordering(344) 00:38:51.135 fused_ordering(345) 00:38:51.135 fused_ordering(346) 00:38:51.135 fused_ordering(347) 00:38:51.135 fused_ordering(348) 00:38:51.135 fused_ordering(349) 00:38:51.135 fused_ordering(350) 00:38:51.135 fused_ordering(351) 00:38:51.135 fused_ordering(352) 00:38:51.135 fused_ordering(353) 00:38:51.135 fused_ordering(354) 00:38:51.135 fused_ordering(355) 00:38:51.135 fused_ordering(356) 00:38:51.135 fused_ordering(357) 00:38:51.135 fused_ordering(358) 00:38:51.135 fused_ordering(359) 00:38:51.135 fused_ordering(360) 00:38:51.135 fused_ordering(361) 00:38:51.135 fused_ordering(362) 00:38:51.135 fused_ordering(363) 00:38:51.135 fused_ordering(364) 00:38:51.135 fused_ordering(365) 00:38:51.135 fused_ordering(366) 00:38:51.135 fused_ordering(367) 00:38:51.135 fused_ordering(368) 00:38:51.135 fused_ordering(369) 00:38:51.135 fused_ordering(370) 00:38:51.135 fused_ordering(371) 00:38:51.135 fused_ordering(372) 00:38:51.135 fused_ordering(373) 00:38:51.135 fused_ordering(374) 00:38:51.135 fused_ordering(375) 00:38:51.135 fused_ordering(376) 00:38:51.135 fused_ordering(377) 00:38:51.135 fused_ordering(378) 00:38:51.135 fused_ordering(379) 00:38:51.135 fused_ordering(380) 00:38:51.135 fused_ordering(381) 00:38:51.135 fused_ordering(382) 00:38:51.135 fused_ordering(383) 00:38:51.135 fused_ordering(384) 00:38:51.135 fused_ordering(385) 00:38:51.135 fused_ordering(386) 00:38:51.135 fused_ordering(387) 00:38:51.135 fused_ordering(388) 00:38:51.135 fused_ordering(389) 00:38:51.135 fused_ordering(390) 00:38:51.135 fused_ordering(391) 00:38:51.135 fused_ordering(392) 00:38:51.135 fused_ordering(393) 00:38:51.135 fused_ordering(394) 00:38:51.135 fused_ordering(395) 00:38:51.135 fused_ordering(396) 00:38:51.135 fused_ordering(397) 00:38:51.135 fused_ordering(398) 00:38:51.135 fused_ordering(399) 00:38:51.135 fused_ordering(400) 00:38:51.135 fused_ordering(401) 00:38:51.135 fused_ordering(402) 00:38:51.135 fused_ordering(403) 00:38:51.135 fused_ordering(404) 00:38:51.135 
fused_ordering(405) 00:38:51.135 fused_ordering(406) 00:38:51.135 fused_ordering(407) 00:38:51.135 fused_ordering(408) 00:38:51.135 fused_ordering(409) 00:38:51.135 fused_ordering(410) 00:38:51.394 fused_ordering(411) 00:38:51.394 fused_ordering(412) 00:38:51.394 fused_ordering(413) 00:38:51.394 fused_ordering(414) 00:38:51.394 fused_ordering(415) 00:38:51.394 fused_ordering(416) 00:38:51.394 fused_ordering(417) 00:38:51.394 fused_ordering(418) 00:38:51.394 fused_ordering(419) 00:38:51.394 fused_ordering(420) 00:38:51.394 fused_ordering(421) 00:38:51.394 fused_ordering(422) 00:38:51.394 fused_ordering(423) 00:38:51.394 fused_ordering(424) 00:38:51.394 fused_ordering(425) 00:38:51.394 fused_ordering(426) 00:38:51.394 fused_ordering(427) 00:38:51.394 fused_ordering(428) 00:38:51.394 fused_ordering(429) 00:38:51.394 fused_ordering(430) 00:38:51.394 fused_ordering(431) 00:38:51.394 fused_ordering(432) 00:38:51.394 fused_ordering(433) 00:38:51.394 fused_ordering(434) 00:38:51.394 fused_ordering(435) 00:38:51.394 fused_ordering(436) 00:38:51.394 fused_ordering(437) 00:38:51.394 fused_ordering(438) 00:38:51.394 fused_ordering(439) 00:38:51.394 fused_ordering(440) 00:38:51.394 fused_ordering(441) 00:38:51.394 fused_ordering(442) 00:38:51.394 fused_ordering(443) 00:38:51.394 fused_ordering(444) 00:38:51.394 fused_ordering(445) 00:38:51.394 fused_ordering(446) 00:38:51.394 fused_ordering(447) 00:38:51.394 fused_ordering(448) 00:38:51.394 fused_ordering(449) 00:38:51.394 fused_ordering(450) 00:38:51.394 fused_ordering(451) 00:38:51.394 fused_ordering(452) 00:38:51.394 fused_ordering(453) 00:38:51.394 fused_ordering(454) 00:38:51.394 fused_ordering(455) 00:38:51.394 fused_ordering(456) 00:38:51.394 fused_ordering(457) 00:38:51.394 fused_ordering(458) 00:38:51.394 fused_ordering(459) 00:38:51.394 fused_ordering(460) 00:38:51.394 fused_ordering(461) 00:38:51.394 fused_ordering(462) 00:38:51.394 fused_ordering(463) 00:38:51.394 fused_ordering(464) 00:38:51.394 fused_ordering(465) 00:38:51.394 fused_ordering(466) 00:38:51.394 fused_ordering(467) 00:38:51.394 fused_ordering(468) 00:38:51.394 fused_ordering(469) 00:38:51.394 fused_ordering(470) 00:38:51.394 fused_ordering(471) 00:38:51.394 fused_ordering(472) 00:38:51.394 fused_ordering(473) 00:38:51.394 fused_ordering(474) 00:38:51.394 fused_ordering(475) 00:38:51.394 fused_ordering(476) 00:38:51.394 fused_ordering(477) 00:38:51.394 fused_ordering(478) 00:38:51.394 fused_ordering(479) 00:38:51.394 fused_ordering(480) 00:38:51.394 fused_ordering(481) 00:38:51.394 fused_ordering(482) 00:38:51.394 fused_ordering(483) 00:38:51.394 fused_ordering(484) 00:38:51.394 fused_ordering(485) 00:38:51.394 fused_ordering(486) 00:38:51.394 fused_ordering(487) 00:38:51.394 fused_ordering(488) 00:38:51.394 fused_ordering(489) 00:38:51.394 fused_ordering(490) 00:38:51.394 fused_ordering(491) 00:38:51.394 fused_ordering(492) 00:38:51.394 fused_ordering(493) 00:38:51.394 fused_ordering(494) 00:38:51.394 fused_ordering(495) 00:38:51.394 fused_ordering(496) 00:38:51.394 fused_ordering(497) 00:38:51.394 fused_ordering(498) 00:38:51.394 fused_ordering(499) 00:38:51.394 fused_ordering(500) 00:38:51.394 fused_ordering(501) 00:38:51.394 fused_ordering(502) 00:38:51.394 fused_ordering(503) 00:38:51.394 fused_ordering(504) 00:38:51.394 fused_ordering(505) 00:38:51.394 fused_ordering(506) 00:38:51.394 fused_ordering(507) 00:38:51.394 fused_ordering(508) 00:38:51.394 fused_ordering(509) 00:38:51.394 fused_ordering(510) 00:38:51.394 fused_ordering(511) 00:38:51.394 fused_ordering(512) 
00:38:51.394 fused_ordering(513) 00:38:51.394 fused_ordering(514) 00:38:51.394 fused_ordering(515) 00:38:51.394 fused_ordering(516) 00:38:51.394 fused_ordering(517) 00:38:51.394 fused_ordering(518) 00:38:51.394 fused_ordering(519) 00:38:51.394 fused_ordering(520) 00:38:51.394 fused_ordering(521) 00:38:51.394 fused_ordering(522) 00:38:51.394 fused_ordering(523) 00:38:51.394 fused_ordering(524) 00:38:51.394 fused_ordering(525) 00:38:51.394 fused_ordering(526) 00:38:51.394 fused_ordering(527) 00:38:51.394 fused_ordering(528) 00:38:51.394 fused_ordering(529) 00:38:51.394 fused_ordering(530) 00:38:51.394 fused_ordering(531) 00:38:51.394 fused_ordering(532) 00:38:51.394 fused_ordering(533) 00:38:51.394 fused_ordering(534) 00:38:51.394 fused_ordering(535) 00:38:51.394 fused_ordering(536) 00:38:51.394 fused_ordering(537) 00:38:51.394 fused_ordering(538) 00:38:51.394 fused_ordering(539) 00:38:51.394 fused_ordering(540) 00:38:51.394 fused_ordering(541) 00:38:51.394 fused_ordering(542) 00:38:51.394 fused_ordering(543) 00:38:51.394 fused_ordering(544) 00:38:51.394 fused_ordering(545) 00:38:51.394 fused_ordering(546) 00:38:51.394 fused_ordering(547) 00:38:51.394 fused_ordering(548) 00:38:51.394 fused_ordering(549) 00:38:51.394 fused_ordering(550) 00:38:51.394 fused_ordering(551) 00:38:51.394 fused_ordering(552) 00:38:51.394 fused_ordering(553) 00:38:51.394 fused_ordering(554) 00:38:51.394 fused_ordering(555) 00:38:51.394 fused_ordering(556) 00:38:51.394 fused_ordering(557) 00:38:51.394 fused_ordering(558) 00:38:51.394 fused_ordering(559) 00:38:51.394 fused_ordering(560) 00:38:51.394 fused_ordering(561) 00:38:51.394 fused_ordering(562) 00:38:51.394 fused_ordering(563) 00:38:51.395 fused_ordering(564) 00:38:51.395 fused_ordering(565) 00:38:51.395 fused_ordering(566) 00:38:51.395 fused_ordering(567) 00:38:51.395 fused_ordering(568) 00:38:51.395 fused_ordering(569) 00:38:51.395 fused_ordering(570) 00:38:51.395 fused_ordering(571) 00:38:51.395 fused_ordering(572) 00:38:51.395 fused_ordering(573) 00:38:51.395 fused_ordering(574) 00:38:51.395 fused_ordering(575) 00:38:51.395 fused_ordering(576) 00:38:51.395 fused_ordering(577) 00:38:51.395 fused_ordering(578) 00:38:51.395 fused_ordering(579) 00:38:51.395 fused_ordering(580) 00:38:51.395 fused_ordering(581) 00:38:51.395 fused_ordering(582) 00:38:51.395 fused_ordering(583) 00:38:51.395 fused_ordering(584) 00:38:51.395 fused_ordering(585) 00:38:51.395 fused_ordering(586) 00:38:51.395 fused_ordering(587) 00:38:51.395 fused_ordering(588) 00:38:51.395 fused_ordering(589) 00:38:51.395 fused_ordering(590) 00:38:51.395 fused_ordering(591) 00:38:51.395 fused_ordering(592) 00:38:51.395 fused_ordering(593) 00:38:51.395 fused_ordering(594) 00:38:51.395 fused_ordering(595) 00:38:51.395 fused_ordering(596) 00:38:51.395 fused_ordering(597) 00:38:51.395 fused_ordering(598) 00:38:51.395 fused_ordering(599) 00:38:51.395 fused_ordering(600) 00:38:51.395 fused_ordering(601) 00:38:51.395 fused_ordering(602) 00:38:51.395 fused_ordering(603) 00:38:51.395 fused_ordering(604) 00:38:51.395 fused_ordering(605) 00:38:51.395 fused_ordering(606) 00:38:51.395 fused_ordering(607) 00:38:51.395 fused_ordering(608) 00:38:51.395 fused_ordering(609) 00:38:51.395 fused_ordering(610) 00:38:51.395 fused_ordering(611) 00:38:51.395 fused_ordering(612) 00:38:51.395 fused_ordering(613) 00:38:51.395 fused_ordering(614) 00:38:51.395 fused_ordering(615) 00:38:51.654 fused_ordering(616) 00:38:51.654 fused_ordering(617) 00:38:51.654 fused_ordering(618) 00:38:51.654 fused_ordering(619) 00:38:51.654 
00:38:51.654 fused_ordering(620) ... fused_ordering(1023)  (one fused_ordering(n) entry per request, n = 620 through 1023, logged between 00:38:51.654 and 00:38:51.915)
14:52:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
14:52:11 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 86902 ']' 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 86902 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 86902 ']' 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 86902 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86902 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86902' 00:38:52.176 killing process with pid 86902 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 86902 00:38:52.176 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 86902 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:38:52.435 00:38:52.435 real 0m3.283s 00:38:52.435 user 0m3.741s 00:38:52.435 sys 0m1.032s 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:52.435 14:52:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:38:52.435 ************************************ 00:38:52.435 END TEST nvmf_fused_ordering 00:38:52.435 ************************************ 00:38:52.435 14:52:11 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:38:52.435 14:52:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:52.435 14:52:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:52.435 14:52:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:52.435 ************************************ 00:38:52.435 START TEST nvmf_delete_subsystem 00:38:52.435 
************************************ 00:38:52.435 14:52:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:38:52.435 * Looking for test storage... 00:38:52.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:38:52.435 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:52.695 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:52.696 Cannot find device "nvmf_tgt_br" 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:52.696 Cannot find device "nvmf_tgt_br2" 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:52.696 Cannot find device "nvmf_tgt_br" 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:52.696 Cannot find device "nvmf_tgt_br2" 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:52.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:52.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:52.696 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:38:52.955 14:52:12 
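For reference, the nvmf_veth_init sequence traced above reduces to roughly the topology below. This is a trimmed sketch covering only the first initiator/target veth pair (the helper also creates nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3/24); it is not a substitute for nvmf/common.sh.

ip netns add nvmf_tgt_ns_spdk                                 # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge joins the host-side ends of both pairs
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity is then verified with the pings that follow in the log.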
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:52.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:52.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:38:52.955 00:38:52.955 --- 10.0.0.2 ping statistics --- 00:38:52.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.955 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:52.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:52.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:38:52.955 00:38:52.955 --- 10.0.0.3 ping statistics --- 00:38:52.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.955 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:52.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:52.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:38:52.955 00:38:52.955 --- 10.0.0.1 ping statistics --- 00:38:52.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:52.955 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=87146 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:38:52.955 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 87146 00:38:52.956 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 87146 ']' 00:38:52.956 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.956 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:52.956 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:52.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.956 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:52.956 14:52:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:52.956 [2024-07-22 14:52:12.506165] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:52.956 [2024-07-22 14:52:12.506247] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:53.215 [2024-07-22 14:52:12.643948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:53.215 [2024-07-22 14:52:12.688680] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:53.215 [2024-07-22 14:52:12.688735] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:53.215 [2024-07-22 14:52:12.688741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:53.215 [2024-07-22 14:52:12.688745] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:53.215 [2024-07-22 14:52:12.688750] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:53.215 [2024-07-22 14:52:12.688954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:53.215 [2024-07-22 14:52:12.688956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.783 [2024-07-22 14:52:13.390442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:53.783 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.047 [2024-07-22 14:52:13.414505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.047 NULL1 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.047 Delay0 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=87196 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:54.047 14:52:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:54.047 [2024-07-22 14:52:13.630598] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
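Condensed, the rpc_cmd calls traced above stand up a TCP subsystem backed by a deliberately slow bdev and then start perf against it, presumably so that I/O is still outstanding when the subsystem is deleted in the next step. A minimal sketch, assuming rpc_cmd forwards to scripts/rpc.py over the /var/tmp/spdk.sock socket shown earlier:

rpc="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumption: rpc_cmd wraps scripts/rpc.py against this socket
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MiB null bdev with 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # roughly 1 s of added latency per I/O
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Initiator side: queue depth 128 on cores 2 and 3 (-c 0xC) for 5 seconds, as in the log
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!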
00:38:55.962 14:52:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:55.962 14:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:38:55.962 14:52:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
(repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" entries for the I/O outstanding when the subsystem was deleted, interleaved with "starting I/O failed: -6" for submissions that could no longer be queued)
[2024-07-22 14:52:15.656422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbd180 is same with the state(5) to be set
[2024-07-22 14:52:15.657510] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc09b0 is same with the state(5) to be set
[2024-07-22 14:52:16.643711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc2be0 is same with the state(5) to be set
[2024-07-22 14:52:16.655321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc06a0 is same with the state(5) to be set
[2024-07-22 14:52:16.655640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbd360 is same with the state(5) to be set
[2024-07-22 14:52:16.658256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd79400c780 is same with the state(5) to be set
[2024-07-22 14:52:16.658490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd79400bfe0 is same with the state(5) to be set
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Controller IO queue size 128, less than required.
Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
Initialization complete.
Launching workers.
00:38:57.160 ========================================================
00:38:57.160                                                                    Latency(us)
00:38:57.160 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:38:57.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     171.31       0.08  889883.76     635.62 1006792.37
00:38:57.160 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     173.80       0.08  969594.86     298.64 2000954.92
00:38:57.160 ========================================================
00:38:57.160 Total                                                                   :     345.11       0.17  930026.87     298.64 2000954.92
00:38:57.160
00:38:57.160 [2024-07-22 14:52:16.659301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc2be0 (9): Bad file descriptor
00:38:57.160 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:38:57.160 14:52:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:38:57.160 14:52:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:38:57.160 14:52:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 87196
00:38:57.160 14:52:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.727 [2024-07-22 14:52:17.192618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=87237 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:38:57.727 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:57.986 [2024-07-22 14:52:17.385963] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
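What the delete_subsystem trace above is doing at this point is a simple bounded poll: the test backgrounds spdk_nvme_perf, then loops on kill -0 with half-second sleeps until the process disappears (or about 20 iterations pass), and finally reaps it with wait. A minimal sketch of that pattern, assuming only what the trace shows -- the function name below is illustrative and not part of the test scripts:

# Poll a backgrounded process (here: spdk_nvme_perf) until it exits, giving up
# after ~20 half-second polls; mirrors the kill -0 / sleep 0.5 loop in the trace.
# wait_for_perf_exit is an illustrative name, not something the scripts define.
wait_for_perf_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do        # signal 0 only checks existence
        (( delay++ > 20 )) && return 1          # bail out after roughly 10 seconds
        sleep 0.5
    done
    wait "$pid" 2>/dev/null                     # reap the child, collect its status
}

In the run above the loop spins for roughly the length of the 3-second perf run (-t 3); once kill -0 starts reporting 'No such process' the script falls through to wait on the pid.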
00:38:58.245 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.245 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:38:58.245 14:52:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:58.814 14:52:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:58.814 14:52:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:38:58.814 14:52:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:59.382 14:52:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:59.382 14:52:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:38:59.382 14:52:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:59.641 14:52:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:59.641 14:52:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:38:59.641 14:52:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.209 14:52:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.209 14:52:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:39:00.209 14:52:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:00.777 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:00.777 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:39:00.777 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:01.057 Initializing NVMe Controllers 00:39:01.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:01.057 Controller IO queue size 128, less than required. 00:39:01.057 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:01.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:01.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:01.057 Initialization complete. Launching workers. 
00:39:01.057 ======================================================== 00:39:01.057 Latency(us) 00:39:01.057 Device Information : IOPS MiB/s Average min max 00:39:01.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002608.83 1000129.68 1041398.22 00:39:01.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003796.12 1000133.58 1010636.88 00:39:01.058 ======================================================== 00:39:01.058 Total : 256.00 0.12 1003202.48 1000129.68 1041398.22 00:39:01.058 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 87237 00:39:01.317 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (87237) - No such process 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 87237 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:01.317 rmmod nvme_tcp 00:39:01.317 rmmod nvme_fabrics 00:39:01.317 rmmod nvme_keyring 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 87146 ']' 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 87146 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 87146 ']' 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 87146 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:39:01.317 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:01.318 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87146 00:39:01.318 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:01.318 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:01.318 killing process with pid 87146 00:39:01.318 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87146' 00:39:01.318 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 87146 00:39:01.318 14:52:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 87146 00:39:01.578 14:52:21 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:01.578 00:39:01.578 real 0m9.226s 00:39:01.578 user 0m28.997s 00:39:01.578 sys 0m1.073s 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:01.578 14:52:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:01.578 ************************************ 00:39:01.578 END TEST nvmf_delete_subsystem 00:39:01.578 ************************************ 00:39:01.837 14:52:21 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:39:01.837 14:52:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:01.837 14:52:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:01.837 14:52:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:01.837 ************************************ 00:39:01.837 START TEST nvmf_ns_masking 00:39:01.837 ************************************ 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:39:01.837 * Looking for test storage... 
00:39:01.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.837 14:52:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=dcd91bf1-1cb2-460f-b908-fd639f3d2e29 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:01.838 14:52:21 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:01.838 Cannot find device "nvmf_tgt_br" 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:01.838 Cannot find device "nvmf_tgt_br2" 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:01.838 Cannot find device "nvmf_tgt_br" 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:39:01.838 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:39:02.097 Cannot find device "nvmf_tgt_br2" 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:02.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:02.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:02.097 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:02.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:02.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:39:02.098 00:39:02.098 --- 10.0.0.2 ping statistics --- 00:39:02.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.098 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:02.098 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:02.098 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:39:02.098 00:39:02.098 --- 10.0.0.3 ping statistics --- 00:39:02.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.098 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:02.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:02.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:39:02.098 00:39:02.098 --- 10.0.0.1 ping statistics --- 00:39:02.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:02.098 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:02.098 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=87477 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:39:02.356 14:52:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 87477 00:39:02.357 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 87477 ']' 00:39:02.357 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:02.357 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:02.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
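The nvmf_veth_init block traced above builds the usual virtual test topology before the target application is launched inside the namespace. Condensed into plain commands taken from that trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 is configured the same way and omitted here for brevity):

# Target side lives in its own network namespace; the initiator stays in the default one.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side veth ends together and let NVMe/TCP traffic in.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target reachability, as in the ping output above

With that in place the target listens on 10.0.0.2:4420 inside nvmf_tgt_ns_spdk, and the nvme connect calls later in this test reach it through nvmf_br from the initiator side.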
00:39:02.357 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:02.357 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:02.357 14:52:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:39:02.357 [2024-07-22 14:52:21.809756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:02.357 [2024-07-22 14:52:21.809828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:02.357 [2024-07-22 14:52:21.950024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:02.615 [2024-07-22 14:52:21.998524] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:02.615 [2024-07-22 14:52:21.998584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:02.615 [2024-07-22 14:52:21.998591] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:02.615 [2024-07-22 14:52:21.998596] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:02.615 [2024-07-22 14:52:21.998600] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:02.615 [2024-07-22 14:52:21.999622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:02.615 [2024-07-22 14:52:21.999882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.615 [2024-07-22 14:52:21.999710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:02.615 [2024-07-22 14:52:21.999888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:03.185 14:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:03.185 14:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:39:03.185 14:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:03.185 14:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:03.185 14:52:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:39:03.185 14:52:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:03.185 14:52:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:03.444 [2024-07-22 14:52:22.937812] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:03.444 14:52:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:39:03.444 14:52:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:39:03.444 14:52:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:39:03.702 Malloc1 00:39:03.702 14:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:39:03.961 Malloc2 00:39:03.961 14:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:03.961 14:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:39:04.220 14:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:04.482 [2024-07-22 14:52:23.918769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:04.482 14:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:39:04.482 14:52:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcd91bf1-1cb2-460f-b908-fd639f3d2e29 -a 10.0.0.2 -s 4420 -i 4 00:39:04.482 14:52:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:39:04.482 14:52:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:39:04.482 14:52:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:39:04.482 14:52:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:39:04.482 14:52:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:07.019 [ 0]:0x1 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d33d87accc446c79dbcf9927027cc0b 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d33d87accc446c79dbcf9927027cc0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 
-n 2 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:07.019 [ 0]:0x1 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:07.019 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d33d87accc446c79dbcf9927027cc0b 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d33d87accc446c79dbcf9927027cc0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:07.020 [ 1]:0x2 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ce8ac1839f324c3e8ec745c9083540e4 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ce8ac1839f324c3e8ec745c9083540e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:07.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:07.020 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.279 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:39:07.537 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:39:07.537 14:52:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcd91bf1-1cb2-460f-b908-fd639f3d2e29 -a 10.0.0.2 -s 4420 -i 4 00:39:07.537 14:52:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:39:07.537 14:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:39:07.537 14:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:39:07.537 14:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:39:07.537 14:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:39:07.537 14:52:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:09.527 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:09.786 [ 0]:0x2 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ce8ac1839f324c3e8ec745c9083540e4 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ce8ac1839f324c3e8ec745c9083540e4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:09.786 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:39:10.045 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:39:10.045 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:10.045 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:10.045 [ 0]:0x1 00:39:10.045 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:10.045 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:10.045 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d33d87accc446c79dbcf9927027cc0b 00:39:10.045 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d33d87accc446c79dbcf9927027cc0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:39:10.046 [ 1]:0x2 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ce8ac1839f324c3e8ec745c9083540e4 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ce8ac1839f324c3e8ec745c9083540e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:10.046 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:39:10.305 [ 0]:0x2 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ce8ac1839f324c3e8ec745c9083540e4 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ce8ac1839f324c3e8ec745c9083540e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:10.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:10.305 14:52:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:39:10.565 14:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:39:10.565 14:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcd91bf1-1cb2-460f-b908-fd639f3d2e29 -a 10.0.0.2 -s 4420 -i 4 00:39:10.823 14:52:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:39:10.823 14:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:39:10.823 14:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:39:10.823 14:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:39:10.823 14:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:39:10.823 14:52:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:39:12.729 
14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:12.729 [ 0]:0x1 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:12.729 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6d33d87accc446c79dbcf9927027cc0b 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6d33d87accc446c79dbcf9927027cc0b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:12.989 [ 1]:0x2 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ce8ac1839f324c3e8ec745c9083540e4 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ce8ac1839f324c3e8ec745c9083540e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:12.989 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=00000000000000000000000000000000 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:39:13.248 [ 0]:0x2 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ce8ac1839f324c3e8ec745c9083540e4 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ce8ac1839f324c3e8ec745c9083540e4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:13.248 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:39:13.523 [2024-07-22 14:52:32.945086] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:39:13.523 2024/07/22 14:52:32 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host 
method, err: Code=-32602 Msg=Invalid parameters 00:39:13.523 request: 00:39:13.523 { 00:39:13.523 "method": "nvmf_ns_remove_host", 00:39:13.523 "params": { 00:39:13.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:13.523 "nsid": 2, 00:39:13.523 "host": "nqn.2016-06.io.spdk:host1" 00:39:13.523 } 00:39:13.523 } 00:39:13.523 Got JSON-RPC error response 00:39:13.523 GoRPCClient: error on JSON-RPC call 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:13.523 14:52:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:39:13.523 [ 0]:0x2 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ce8ac1839f324c3e8ec745c9083540e4 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ce8ac1839f324c3e8ec745c9083540e4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:13.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:13.523 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:13.806 rmmod nvme_tcp 00:39:13.806 rmmod nvme_fabrics 00:39:13.806 rmmod nvme_keyring 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 87477 ']' 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 87477 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 87477 ']' 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 87477 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:13.806 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87477 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:14.065 killing process with pid 87477 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87477' 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 87477 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 87477 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:14.065 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.325 14:52:33 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:14.325 00:39:14.325 real 0m12.484s 00:39:14.325 user 0m49.596s 00:39:14.325 sys 0m1.904s 00:39:14.325 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:14.325 14:52:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:39:14.325 ************************************ 00:39:14.325 END TEST nvmf_ns_masking 00:39:14.325 ************************************ 00:39:14.325 14:52:33 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:39:14.325 14:52:33 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:39:14.325 14:52:33 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:39:14.325 14:52:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:14.325 14:52:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:14.325 14:52:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:14.325 ************************************ 00:39:14.325 START TEST nvmf_host_management 00:39:14.325 ************************************ 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:39:14.325 * Looking for test storage... 00:39:14.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:14.325 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:14.326 14:52:33 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:14.326 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:14.586 Cannot find device "nvmf_tgt_br" 00:39:14.586 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:39:14.586 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:14.586 Cannot find device "nvmf_tgt_br2" 00:39:14.586 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:39:14.586 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:14.586 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:14.586 Cannot find device "nvmf_tgt_br" 00:39:14.586 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:39:14.586 14:52:33 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:39:14.586 Cannot find device "nvmf_tgt_br2" 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:14.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:14.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:14.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:14.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:39:14.586 00:39:14.586 --- 10.0.0.2 ping statistics --- 00:39:14.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:14.586 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:14.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:14.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:39:14.586 00:39:14.586 --- 10.0.0.3 ping statistics --- 00:39:14.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:14.586 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:39:14.586 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:14.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:14.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:39:14.845 00:39:14.845 --- 10.0.0.1 ping statistics --- 00:39:14.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:14.845 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=88018 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 88018 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 88018 ']' 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:14.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:14.845 14:52:34 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:14.845 [2024-07-22 14:52:34.307025] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:39:14.845 [2024-07-22 14:52:34.307081] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:14.845 [2024-07-22 14:52:34.446765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:15.104 [2024-07-22 14:52:34.497341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:15.104 [2024-07-22 14:52:34.497383] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:15.104 [2024-07-22 14:52:34.497389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:15.104 [2024-07-22 14:52:34.497393] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:15.105 [2024-07-22 14:52:34.497397] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:15.105 [2024-07-22 14:52:34.497703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:15.105 [2024-07-22 14:52:34.497779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:39:15.105 [2024-07-22 14:52:34.501736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:39:15.105 [2024-07-22 14:52:34.501736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:15.672 [2024-07-22 14:52:35.253886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:15.672 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
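For reference, the subsystem setup that rpc_cmd replays from rpcs.txt at this point is not expanded in the trace (only "cat" and "rpc_cmd" are shown). A minimal sketch of the equivalent RPC sequence, assuming the Malloc0 bdev and the 10.0.0.2:4420 TCP listener that appear just below, plus the nvmf/common.sh defaults quoted earlier (64 MiB malloc bdev, 512-byte blocks, serial SPDKISFASTANDAWESOME, host nqn.2016-06.io.spdk:host0), would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                                    # issued above via rpc_cmd
    $rpc bdev_malloc_create 64 512 -b Malloc0                                       # backing bdev for the namespace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME   # serial from nvmf/common.sh
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # whitelist the bdevperf host

This is a sketch of what the batched rpcs.txt is expected to contain, not a capture from this run; the exact option set used by host_management.sh may differ.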
00:39:15.930 Malloc0 00:39:15.930 [2024-07-22 14:52:35.324477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=88090 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 88090 /var/tmp/bdevperf.sock 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 88090 ']' 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:39:15.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:15.930 14:52:35 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:15.931 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:15.931 { 00:39:15.931 "params": { 00:39:15.931 "name": "Nvme$subsystem", 00:39:15.931 "trtype": "$TEST_TRANSPORT", 00:39:15.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:15.931 "adrfam": "ipv4", 00:39:15.931 "trsvcid": "$NVMF_PORT", 00:39:15.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:15.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:15.931 "hdgst": ${hdgst:-false}, 00:39:15.931 "ddgst": ${ddgst:-false} 00:39:15.931 }, 00:39:15.931 "method": "bdev_nvme_attach_controller" 00:39:15.931 } 00:39:15.931 EOF 00:39:15.931 )") 00:39:15.931 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:39:15.931 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
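The --json /dev/fd/63 argument feeds bdevperf a config generated on the fly by gen_nvmf_target_json 0; the heredoc template above is filled in by the jq and printf steps traced around this point. Written out as a file, the generated config is expected to look roughly as follows. Only the bdev_nvme_attach_controller entry is taken from this trace; the outer "subsystems"/"config" wrapper is the usual SPDK JSON-config layout and is assumed here.

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }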
00:39:15.931 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:39:15.931 14:52:35 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:15.931 "params": { 00:39:15.931 "name": "Nvme0", 00:39:15.931 "trtype": "tcp", 00:39:15.931 "traddr": "10.0.0.2", 00:39:15.931 "adrfam": "ipv4", 00:39:15.931 "trsvcid": "4420", 00:39:15.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:15.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:15.931 "hdgst": false, 00:39:15.931 "ddgst": false 00:39:15.931 }, 00:39:15.931 "method": "bdev_nvme_attach_controller" 00:39:15.931 }' 00:39:15.931 [2024-07-22 14:52:35.435859] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:15.931 [2024-07-22 14:52:35.436236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88090 ] 00:39:16.189 [2024-07-22 14:52:35.576230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.189 [2024-07-22 14:52:35.624941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.189 Running I/O for 10 seconds... 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # 
read_io_count=1196 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1196 -ge 100 ']' 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.758 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:16.758 [2024-07-22 14:52:36.383738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.383987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.383994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.758 [2024-07-22 14:52:36.384108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.758 [2024-07-22 14:52:36.384115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:39:16.759 [2024-07-22 14:52:36.384366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 
14:52:36.384484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:39:16.759 [2024-07-22 14:52:36.384556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384634] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1343550 was disconnected and freed. reset controller. 
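The wall of ABORTED - SQ DELETION completions above is the expected effect of the nvmf_subsystem_remove_host call made earlier while bdevperf still had a queue depth of 64 in flight: the target tears down the host's I/O queue pairs, every outstanding READ/WRITE completes as aborted, and bdev_nvme frees the qpair and schedules a controller reset. The two RPCs driving this, issued via rpc_cmd in the trace, are equivalent to:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # yank host access while I/O is running -> I/O queue pairs dropped, in-flight commands aborted
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-admit the host; the log below then reports "Resetting controller successful."
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0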
00:39:16.759 [2024-07-22 14:52:36.384757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:39:16.759 [2024-07-22 14:52:36.384776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.759 [2024-07-22 14:52:36.384784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:39:16.759 [2024-07-22 14:52:36.384789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.760 [2024-07-22 14:52:36.384796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:39:16.760 [2024-07-22 14:52:36.384802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.760 [2024-07-22 14:52:36.384808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:39:16.760 [2024-07-22 14:52:36.384814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:39:16.760 [2024-07-22 14:52:36.384820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1343af0 is same with the state(5) to be set 00:39:16.760 [2024-07-22 14:52:36.385842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:39:17.021 task offset: 36992 on job bdev=Nvme0n1 fails 00:39:17.021 00:39:17.021 Latency(us) 00:39:17.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:17.021 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:17.021 Job: Nvme0n1 ended in about 0.61 seconds with error 00:39:17.021 Verification LBA range: start 0x0 length 0x400 00:39:17.021 Nvme0n1 : 0.61 2103.81 131.49 105.19 0.00 28376.39 1674.17 25642.03 00:39:17.021 =================================================================================================================== 00:39:17.021 Total : 2103.81 131.49 105.19 0.00 28376.39 1674.17 25642.03 00:39:17.021 [2024-07-22 14:52:36.387829] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:17.021 [2024-07-22 14:52:36.387848] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1343af0 (9): Bad file descriptor 00:39:17.021 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.021 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:39:17.021 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:17.021 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:17.021 [2024-07-22 14:52:36.394215] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
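With the controller reset reported successful, target/host_management.sh@85 above re-authorizes the host against the subsystem so the next bdevperf run can connect again. rpc_cmd is the harness wrapper around scripts/rpc.py, so outside the test the same step is, as a sketch using the NQNs from this run, a single call:

# Allow host0 to (re)connect to cnode0; arguments are the subsystem NQN and the host NQN.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0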
00:39:17.021 14:52:36 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:17.021 14:52:36 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 88090 00:39:17.960 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (88090) - No such process 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:17.960 { 00:39:17.960 "params": { 00:39:17.960 "name": "Nvme$subsystem", 00:39:17.960 "trtype": "$TEST_TRANSPORT", 00:39:17.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:17.960 "adrfam": "ipv4", 00:39:17.960 "trsvcid": "$NVMF_PORT", 00:39:17.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:17.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:17.960 "hdgst": ${hdgst:-false}, 00:39:17.960 "ddgst": ${ddgst:-false} 00:39:17.960 }, 00:39:17.960 "method": "bdev_nvme_attach_controller" 00:39:17.960 } 00:39:17.960 EOF 00:39:17.960 )") 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:39:17.960 14:52:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:17.960 "params": { 00:39:17.960 "name": "Nvme0", 00:39:17.960 "trtype": "tcp", 00:39:17.960 "traddr": "10.0.0.2", 00:39:17.960 "adrfam": "ipv4", 00:39:17.960 "trsvcid": "4420", 00:39:17.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:17.960 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:17.960 "hdgst": false, 00:39:17.960 "ddgst": false 00:39:17.960 }, 00:39:17.960 "method": "bdev_nvme_attach_controller" 00:39:17.960 }' 00:39:17.960 [2024-07-22 14:52:37.460637] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:17.960 [2024-07-22 14:52:37.461066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88140 ] 00:39:18.219 [2024-07-22 14:52:37.600603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:18.219 [2024-07-22 14:52:37.648977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.219 Running I/O for 1 seconds... 
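The relaunched bdevperf takes its bdev configuration from gen_nvmf_target_json via --json /dev/fd/62, and the resolved bdev_nvme_attach_controller parameters are printed just above. To repeat the run by hand, the same fragment can be written into an ordinary JSON-config file. The wrapper object below follows the usual SPDK "subsystems" layout and the file path is a placeholder, so treat this as a sketch rather than the harness's exact output:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1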
00:39:19.597 00:39:19.597 Latency(us) 00:39:19.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.597 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:39:19.597 Verification LBA range: start 0x0 length 0x400 00:39:19.597 Nvme0n1 : 1.00 2166.60 135.41 0.00 0.00 29049.73 3534.37 27015.71 00:39:19.597 =================================================================================================================== 00:39:19.598 Total : 2166.60 135.41 0.00 0.00 29049.73 3534.37 27015.71 00:39:19.598 14:52:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:39:19.598 14:52:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:39:19.598 14:52:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:39:19.598 14:52:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:39:19.598 14:52:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:39:19.598 14:52:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:19.598 14:52:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:19.598 rmmod nvme_tcp 00:39:19.598 rmmod nvme_fabrics 00:39:19.598 rmmod nvme_keyring 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 88018 ']' 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 88018 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 88018 ']' 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 88018 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88018 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:39:19.598 killing process with pid 88018 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88018' 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 88018 00:39:19.598 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 88018 00:39:19.856 [2024-07-22 14:52:39.355508] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:39:19.856 00:39:19.856 real 0m5.647s 00:39:19.856 user 0m21.958s 00:39:19.856 sys 0m1.340s 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:19.856 14:52:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:39:19.856 ************************************ 00:39:19.856 END TEST nvmf_host_management 00:39:19.856 ************************************ 00:39:19.856 14:52:39 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:39:19.856 14:52:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:19.856 14:52:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:19.856 14:52:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:20.116 ************************************ 00:39:20.116 START TEST nvmf_lvol 00:39:20.116 ************************************ 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:39:20.116 * Looking for test storage... 
00:39:20.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:20.116 14:52:39 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:20.116 Cannot find device "nvmf_tgt_br" 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:20.116 Cannot find device "nvmf_tgt_br2" 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:20.116 Cannot find device "nvmf_tgt_br" 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:39:20.116 Cannot find device "nvmf_tgt_br2" 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:39:20.116 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:20.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:20.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:20.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:39:20.376 00:39:20.376 --- 10.0.0.2 ping statistics --- 00:39:20.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.376 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:20.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:20.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:39:20.376 00:39:20.376 --- 10.0.0.3 ping statistics --- 00:39:20.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.376 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:20.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:20.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:39:20.376 00:39:20.376 --- 10.0.0.1 ping statistics --- 00:39:20.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.376 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:20.376 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.377 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:20.377 14:52:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=88344 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 88344 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 88344 ']' 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:20.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:20.636 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:20.636 [2024-07-22 14:52:40.067907] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:20.636 [2024-07-22 14:52:40.067978] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.636 [2024-07-22 14:52:40.206519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:20.636 [2024-07-22 14:52:40.254642] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:20.636 [2024-07-22 14:52:40.254701] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
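The three pings above verify the topology that nvmf_veth_init builds before the target starts: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target interfaces (10.0.0.2 and 10.0.0.3) are moved into nvmf_tgt_ns_spdk, and the veth peers are joined by the nvmf_br bridge with an iptables rule admitting TCP port 4420. Condensed from the commands recorded above to a single target interface, a minimal sketch of the same setup (run as root) is:

# Re-create the harness topology for one target interface.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # should get one reply across the bridge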
00:39:20.636 [2024-07-22 14:52:40.254707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.636 [2024-07-22 14:52:40.254711] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.636 [2024-07-22 14:52:40.254715] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.636 [2024-07-22 14:52:40.254911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.636 [2024-07-22 14:52:40.255037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.636 [2024-07-22 14:52:40.255050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:21.613 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:21.613 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:39:21.613 14:52:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:21.613 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:21.613 14:52:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:21.613 14:52:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:21.613 14:52:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:21.613 [2024-07-22 14:52:41.218300] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:21.873 14:52:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:21.873 14:52:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:39:21.873 14:52:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:22.133 14:52:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:39:22.133 14:52:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:39:22.393 14:52:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:39:22.652 14:52:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0cd21d05-b321-450e-afa7-3eb53d9216fc 00:39:22.652 14:52:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0cd21d05-b321-450e-afa7-3eb53d9216fc lvol 20 00:39:22.912 14:52:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bc239661-c8f2-44b7-abe6-2c146f35cb33 00:39:22.912 14:52:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:22.912 14:52:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bc239661-c8f2-44b7-abe6-2c146f35cb33 00:39:23.172 14:52:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:23.438 [2024-07-22 14:52:42.862159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:23.439 14:52:42 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:23.701 14:52:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=88486 00:39:23.701 14:52:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:39:23.701 14:52:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:39:24.640 14:52:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot bc239661-c8f2-44b7-abe6-2c146f35cb33 MY_SNAPSHOT 00:39:24.900 14:52:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0ffe8b55-67e3-4dce-98df-ab995ddca22a 00:39:24.900 14:52:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize bc239661-c8f2-44b7-abe6-2c146f35cb33 30 00:39:25.172 14:52:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0ffe8b55-67e3-4dce-98df-ab995ddca22a MY_CLONE 00:39:25.433 14:52:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4e4cdb45-fdcb-42a0-b43d-285b03a97c49 00:39:25.433 14:52:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4e4cdb45-fdcb-42a0-b43d-285b03a97c49 00:39:26.002 14:52:45 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 88486 00:39:34.143 Initializing NVMe Controllers 00:39:34.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:34.143 Controller IO queue size 128, less than required. 00:39:34.143 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:34.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:34.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:34.143 Initialization complete. Launching workers. 
00:39:34.143 ======================================================== 00:39:34.143 Latency(us) 00:39:34.144 Device Information : IOPS MiB/s Average min max 00:39:34.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11765.50 45.96 10879.66 1910.34 52272.16 00:39:34.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11676.90 45.61 10963.30 2113.03 42948.29 00:39:34.144 ======================================================== 00:39:34.144 Total : 23442.40 91.57 10921.32 1910.34 52272.16 00:39:34.144 00:39:34.144 14:52:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:34.144 14:52:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bc239661-c8f2-44b7-abe6-2c146f35cb33 00:39:34.144 14:52:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0cd21d05-b321-450e-afa7-3eb53d9216fc 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:34.402 14:52:53 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:34.402 rmmod nvme_tcp 00:39:34.402 rmmod nvme_fabrics 00:39:34.661 rmmod nvme_keyring 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 88344 ']' 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 88344 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 88344 ']' 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 88344 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88344 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:34.661 killing process with pid 88344 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88344' 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 88344 00:39:34.661 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 88344 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
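In the nvmf_lvol run above, while spdk_nvme_perf keeps random writes going against the exported lvol, the script snapshots the volume, grows it from 20 to 30 (sizes in MiB), clones the snapshot, and inflates the clone so it no longer depends on the snapshot's clusters. Reproducing that sequence by hand uses the same RPCs; the sketch below reuses the IDs reported in this run, which will of course differ on another run:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
LVOL=bc239661-c8f2-44b7-abe6-2c146f35cb33              # lvol UUID from this run
SNAP=$($RPC bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)    # read-only, point-in-time snapshot
$RPC bdev_lvol_resize "$LVOL" 30                       # grow the live lvol to its final size
CLONE=$($RPC bdev_lvol_clone "$SNAP" MY_CLONE)         # thin clone backed by the snapshot
$RPC bdev_lvol_inflate "$CLONE"                        # copy clusters so the clone stands alone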
00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:34.920 14:52:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:34.921 00:39:34.921 real 0m14.880s 00:39:34.921 user 1m3.451s 00:39:34.921 sys 0m2.792s 00:39:34.921 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:34.921 14:52:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:39:34.921 ************************************ 00:39:34.921 END TEST nvmf_lvol 00:39:34.921 ************************************ 00:39:34.921 14:52:54 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:39:34.921 14:52:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:34.921 14:52:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:34.921 14:52:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:34.921 ************************************ 00:39:34.921 START TEST nvmf_lvs_grow 00:39:34.921 ************************************ 00:39:34.921 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:39:34.921 * Looking for test storage... 
00:39:35.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:39:35.181 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:39:35.182 Cannot find device "nvmf_tgt_br" 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:39:35.182 Cannot find device "nvmf_tgt_br2" 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:39:35.182 Cannot find device "nvmf_tgt_br" 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:39:35.182 Cannot find device "nvmf_tgt_br2" 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:39:35.182 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:39:35.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:39:35.182 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:39:35.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:35.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:39:35.443 00:39:35.443 --- 10.0.0.2 ping statistics --- 00:39:35.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.443 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:39:35.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:39:35.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:39:35.443 00:39:35.443 --- 10.0.0.3 ping statistics --- 00:39:35.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.443 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:39:35.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:35.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:39:35.443 00:39:35.443 --- 10.0.0.1 ping statistics --- 00:39:35.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:35.443 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:39:35.443 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=88850 00:39:35.444 14:52:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 88850 00:39:35.444 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 88850 ']' 00:39:35.444 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:35.444 14:52:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:35.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:35.444 14:52:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:35.444 14:52:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:35.444 14:52:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:35.444 [2024-07-22 14:52:55.053431] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:39:35.444 [2024-07-22 14:52:55.053499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:35.704 [2024-07-22 14:52:55.191780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.704 [2024-07-22 14:52:55.236063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:35.704 [2024-07-22 14:52:55.236104] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:35.704 [2024-07-22 14:52:55.236111] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:35.704 [2024-07-22 14:52:55.236116] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:35.704 [2024-07-22 14:52:55.236120] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:35.704 [2024-07-22 14:52:55.236143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.274 14:52:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:36.274 14:52:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:39:36.274 14:52:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:36.274 14:52:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:36.274 14:52:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:36.533 14:52:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:36.533 14:52:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:36.533 [2024-07-22 14:52:56.084468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:36.533 ************************************ 00:39:36.533 START TEST lvs_grow_clean 00:39:36.533 ************************************ 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:36.533 14:52:56 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:36.533 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:36.793 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:36.793 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:37.053 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:37.053 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:37.053 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:37.312 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:37.312 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:37.312 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e lvol 150 00:39:37.312 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=14cdf9bc-c258-401c-bd3c-e2ca74eb1513 00:39:37.312 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:37.312 14:52:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:37.573 [2024-07-22 14:52:57.110705] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:37.573 [2024-07-22 14:52:57.110770] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:37.573 true 00:39:37.573 14:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:37.573 14:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:37.858 14:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:37.858 14:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:38.118 14:52:57 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14cdf9bc-c258-401c-bd3c-e2ca74eb1513 00:39:38.118 14:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:38.377 [2024-07-22 14:52:57.841665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.377 14:52:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89006 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89006 /var/tmp/bdevperf.sock 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 89006 ']' 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:38.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:38.637 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:38.637 [2024-07-22 14:52:58.088482] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
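Condensed, the provisioning that lvs_grow_clean just performed is a short rpc.py sequence: back an lvstore with a 200 MiB AIO file, carve a 150 MiB lvol out of it, and export that lvol over NVMe/TCP. This is a sketch assembled from the RPCs in the trace; $rpc and $aio abbreviate the full paths used above, and the lvstore/lvol identifiers are simply whatever the create calls print:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  # 200 MiB backing file exposed as an AIO bdev with 4 KiB blocks
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096

  # lvstore with 4 MiB clusters, then a 150 MiB lvol inside it
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

  # export the lvol through an NVMe/TCP subsystem listening on the namespaced address
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The bdevperf process whose startup banner appears just above acts as the initiator: it attaches to the same subsystem with bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 and then drives the 10-second randwrite workload against Nvme0n1 while the lvstore is grown underneath it.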
00:39:38.637 [2024-07-22 14:52:58.088544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89006 ] 00:39:38.637 [2024-07-22 14:52:58.228170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.897 [2024-07-22 14:52:58.273624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.464 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:39.464 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:39:39.464 14:52:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:39.723 Nvme0n1 00:39:39.723 14:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:39.983 [ 00:39:39.983 { 00:39:39.983 "aliases": [ 00:39:39.983 "14cdf9bc-c258-401c-bd3c-e2ca74eb1513" 00:39:39.983 ], 00:39:39.983 "assigned_rate_limits": { 00:39:39.983 "r_mbytes_per_sec": 0, 00:39:39.983 "rw_ios_per_sec": 0, 00:39:39.983 "rw_mbytes_per_sec": 0, 00:39:39.983 "w_mbytes_per_sec": 0 00:39:39.983 }, 00:39:39.983 "block_size": 4096, 00:39:39.983 "claimed": false, 00:39:39.983 "driver_specific": { 00:39:39.983 "mp_policy": "active_passive", 00:39:39.983 "nvme": [ 00:39:39.983 { 00:39:39.983 "ctrlr_data": { 00:39:39.983 "ana_reporting": false, 00:39:39.983 "cntlid": 1, 00:39:39.983 "firmware_revision": "24.05.1", 00:39:39.983 "model_number": "SPDK bdev Controller", 00:39:39.983 "multi_ctrlr": true, 00:39:39.983 "oacs": { 00:39:39.983 "firmware": 0, 00:39:39.983 "format": 0, 00:39:39.983 "ns_manage": 0, 00:39:39.983 "security": 0 00:39:39.983 }, 00:39:39.983 "serial_number": "SPDK0", 00:39:39.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:39.983 "vendor_id": "0x8086" 00:39:39.983 }, 00:39:39.983 "ns_data": { 00:39:39.983 "can_share": true, 00:39:39.983 "id": 1 00:39:39.983 }, 00:39:39.983 "trid": { 00:39:39.983 "adrfam": "IPv4", 00:39:39.983 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:39.983 "traddr": "10.0.0.2", 00:39:39.983 "trsvcid": "4420", 00:39:39.983 "trtype": "TCP" 00:39:39.983 }, 00:39:39.983 "vs": { 00:39:39.983 "nvme_version": "1.3" 00:39:39.983 } 00:39:39.983 } 00:39:39.983 ] 00:39:39.983 }, 00:39:39.983 "memory_domains": [ 00:39:39.983 { 00:39:39.983 "dma_device_id": "system", 00:39:39.983 "dma_device_type": 1 00:39:39.983 } 00:39:39.983 ], 00:39:39.983 "name": "Nvme0n1", 00:39:39.983 "num_blocks": 38912, 00:39:39.983 "product_name": "NVMe disk", 00:39:39.983 "supported_io_types": { 00:39:39.983 "abort": true, 00:39:39.983 "compare": true, 00:39:39.983 "compare_and_write": true, 00:39:39.983 "flush": true, 00:39:39.983 "nvme_admin": true, 00:39:39.983 "nvme_io": true, 00:39:39.983 "read": true, 00:39:39.983 "reset": true, 00:39:39.983 "unmap": true, 00:39:39.983 "write": true, 00:39:39.983 "write_zeroes": true 00:39:39.983 }, 00:39:39.983 "uuid": "14cdf9bc-c258-401c-bd3c-e2ca74eb1513", 00:39:39.983 "zoned": false 00:39:39.983 } 00:39:39.983 ] 00:39:39.983 14:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89048 00:39:39.983 14:52:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:39.983 14:52:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:39.983 Running I/O for 10 seconds... 00:39:40.921 Latency(us) 00:39:40.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:40.921 Nvme0n1 : 1.00 11345.00 44.32 0.00 0.00 0.00 0.00 0.00 00:39:40.921 =================================================================================================================== 00:39:40.921 Total : 11345.00 44.32 0.00 0.00 0.00 0.00 0.00 00:39:40.921 00:39:41.924 14:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:41.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:41.924 Nvme0n1 : 2.00 11342.00 44.30 0.00 0.00 0.00 0.00 0.00 00:39:41.924 =================================================================================================================== 00:39:41.924 Total : 11342.00 44.30 0.00 0.00 0.00 0.00 0.00 00:39:41.924 00:39:42.185 true 00:39:42.185 14:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:42.185 14:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:42.444 14:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:42.444 14:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:42.444 14:53:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 89048 00:39:43.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:43.016 Nvme0n1 : 3.00 11283.67 44.08 0.00 0.00 0.00 0.00 0.00 00:39:43.016 =================================================================================================================== 00:39:43.016 Total : 11283.67 44.08 0.00 0.00 0.00 0.00 0.00 00:39:43.016 00:39:43.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:43.953 Nvme0n1 : 4.00 11203.25 43.76 0.00 0.00 0.00 0.00 0.00 00:39:43.953 =================================================================================================================== 00:39:43.953 Total : 11203.25 43.76 0.00 0.00 0.00 0.00 0.00 00:39:43.953 00:39:44.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:44.892 Nvme0n1 : 5.00 11158.00 43.59 0.00 0.00 0.00 0.00 0.00 00:39:44.892 =================================================================================================================== 00:39:44.892 Total : 11158.00 43.59 0.00 0.00 0.00 0.00 0.00 00:39:44.892 00:39:45.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:45.903 Nvme0n1 : 6.00 11075.33 43.26 0.00 0.00 0.00 0.00 0.00 00:39:45.903 =================================================================================================================== 00:39:45.903 Total : 11075.33 43.26 0.00 0.00 0.00 0.00 0.00 00:39:45.903 00:39:47.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:39:47.285 Nvme0n1 : 7.00 11044.00 43.14 0.00 0.00 0.00 0.00 0.00 00:39:47.285 =================================================================================================================== 00:39:47.285 Total : 11044.00 43.14 0.00 0.00 0.00 0.00 0.00 00:39:47.285 00:39:48.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:48.223 Nvme0n1 : 8.00 11003.50 42.98 0.00 0.00 0.00 0.00 0.00 00:39:48.223 =================================================================================================================== 00:39:48.223 Total : 11003.50 42.98 0.00 0.00 0.00 0.00 0.00 00:39:48.223 00:39:49.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:49.160 Nvme0n1 : 9.00 10966.56 42.84 0.00 0.00 0.00 0.00 0.00 00:39:49.160 =================================================================================================================== 00:39:49.160 Total : 10966.56 42.84 0.00 0.00 0.00 0.00 0.00 00:39:49.160 00:39:50.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.099 Nvme0n1 : 10.00 10963.40 42.83 0.00 0.00 0.00 0.00 0.00 00:39:50.099 =================================================================================================================== 00:39:50.099 Total : 10963.40 42.83 0.00 0.00 0.00 0.00 0.00 00:39:50.099 00:39:50.099 00:39:50.099 Latency(us) 00:39:50.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:50.099 Nvme0n1 : 10.01 10968.28 42.84 0.00 0.00 11666.42 3777.62 24611.77 00:39:50.099 =================================================================================================================== 00:39:50.099 Total : 10968.28 42.84 0.00 0.00 11666.42 3777.62 24611.77 00:39:50.099 0 00:39:50.099 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89006 00:39:50.099 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 89006 ']' 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 89006 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89006 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:39:50.100 killing process with pid 89006 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89006' 00:39:50.100 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 89006 00:39:50.100 Received shutdown signal, test time was about 10.000000 seconds 00:39:50.100 00:39:50.100 Latency(us) 00:39:50.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:50.100 =================================================================================================================== 00:39:50.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:50.100 14:53:09 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 89006 00:39:50.359 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:50.359 14:53:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:50.618 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:50.619 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:50.878 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:50.878 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:50.878 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:50.878 [2024-07-22 14:53:10.500307] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:51.138 2024/07/22 14:53:10 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:acfdf7c0-7a62-4a3f-9b9b-e23f606c517e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:39:51.138 request: 00:39:51.138 { 00:39:51.138 "method": "bdev_lvol_get_lvstores", 00:39:51.138 "params": { 
00:39:51.138 "uuid": "acfdf7c0-7a62-4a3f-9b9b-e23f606c517e" 00:39:51.138 } 00:39:51.138 } 00:39:51.138 Got JSON-RPC error response 00:39:51.138 GoRPCClient: error on JSON-RPC call 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:51.138 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:51.398 aio_bdev 00:39:51.398 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 14cdf9bc-c258-401c-bd3c-e2ca74eb1513 00:39:51.398 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=14cdf9bc-c258-401c-bd3c-e2ca74eb1513 00:39:51.398 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:39:51.398 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:39:51.398 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:39:51.398 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:39:51.398 14:53:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:51.657 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14cdf9bc-c258-401c-bd3c-e2ca74eb1513 -t 2000 00:39:51.657 [ 00:39:51.657 { 00:39:51.657 "aliases": [ 00:39:51.657 "lvs/lvol" 00:39:51.657 ], 00:39:51.657 "assigned_rate_limits": { 00:39:51.657 "r_mbytes_per_sec": 0, 00:39:51.657 "rw_ios_per_sec": 0, 00:39:51.657 "rw_mbytes_per_sec": 0, 00:39:51.657 "w_mbytes_per_sec": 0 00:39:51.658 }, 00:39:51.658 "block_size": 4096, 00:39:51.658 "claimed": false, 00:39:51.658 "driver_specific": { 00:39:51.658 "lvol": { 00:39:51.658 "base_bdev": "aio_bdev", 00:39:51.658 "clone": false, 00:39:51.658 "esnap_clone": false, 00:39:51.658 "lvol_store_uuid": "acfdf7c0-7a62-4a3f-9b9b-e23f606c517e", 00:39:51.658 "num_allocated_clusters": 38, 00:39:51.658 "snapshot": false, 00:39:51.658 "thin_provision": false 00:39:51.658 } 00:39:51.658 }, 00:39:51.658 "name": "14cdf9bc-c258-401c-bd3c-e2ca74eb1513", 00:39:51.658 "num_blocks": 38912, 00:39:51.658 "product_name": "Logical Volume", 00:39:51.658 "supported_io_types": { 00:39:51.658 "abort": false, 00:39:51.658 "compare": false, 00:39:51.658 "compare_and_write": false, 00:39:51.658 "flush": false, 00:39:51.658 "nvme_admin": false, 00:39:51.658 "nvme_io": false, 00:39:51.658 "read": true, 00:39:51.658 "reset": true, 00:39:51.658 "unmap": true, 00:39:51.658 "write": true, 00:39:51.658 "write_zeroes": true 00:39:51.658 }, 00:39:51.658 "uuid": "14cdf9bc-c258-401c-bd3c-e2ca74eb1513", 00:39:51.658 "zoned": false 00:39:51.658 } 00:39:51.658 ] 00:39:51.918 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:39:51.918 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:51.918 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:51.918 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:51.918 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:51.918 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:52.177 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:52.177 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 14cdf9bc-c258-401c-bd3c-e2ca74eb1513 00:39:52.436 14:53:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e 00:39:52.696 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:52.696 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:53.266 ************************************ 00:39:53.266 END TEST lvs_grow_clean 00:39:53.266 ************************************ 00:39:53.266 00:39:53.266 real 0m16.570s 00:39:53.266 user 0m15.733s 00:39:53.266 sys 0m1.955s 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:53.266 ************************************ 00:39:53.266 START TEST lvs_grow_dirty 00:39:53.266 ************************************ 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:53.266 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:53.525 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:53.525 14:53:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:53.783 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:39:53.783 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:39:53.783 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:53.783 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:53.783 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:53.783 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 lvol 150 00:39:54.067 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=518f4bcd-884d-4219-a3e7-eb4f49f816a8 00:39:54.067 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:39:54.067 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:54.329 [2024-07-22 14:53:13.727808] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:54.329 [2024-07-22 14:53:13.727880] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:54.329 true 00:39:54.329 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:39:54.329 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:54.329 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:54.329 14:53:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:54.588 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 518f4bcd-884d-4219-a3e7-eb4f49f816a8 00:39:54.847 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:54.847 [2024-07-22 14:53:14.438731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:54.847 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=89428 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 89428 /var/tmp/bdevperf.sock 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 89428 ']' 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:55.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:55.107 14:53:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:55.107 [2024-07-22 14:53:14.683409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
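The cluster counts asserted in both variants follow directly from the sizes involved. With --cluster-sz 4194304, the 200 MiB backing file holds 50 clusters of 4 MiB, one of which goes to lvstore metadata, giving the total_data_clusters=49 seen right after creation; truncating the file to 400 MiB, rescanning the AIO bdev, and calling bdev_lvol_grow_lvstore lifts that to 99. The 150 MiB lvol consumes ceil(150 / 4) = 38 clusters (num_allocated_clusters in the bdev dump above), so the store reports 99 - 38 = 61 free clusters. A standalone re-check against the clean run's lvstore would look like this (a sketch; the UUID is the one printed above, and jq is assumed to be available):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # lvstore created by the clean run; the dirty run's fd4bf8c5-... store behaves the same way
  $rpc bdev_lvol_get_lvstores -u acfdf7c0-7a62-4a3f-9b9b-e23f606c517e \
      | jq -r '.[0] | "total=\(.total_data_clusters) free=\(.free_clusters)"'
  # expected after the grow, with the 150 MiB lvol still allocated: total=99 free=61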
00:39:55.107 [2024-07-22 14:53:14.683465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89428 ] 00:39:55.367 [2024-07-22 14:53:14.823032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.367 [2024-07-22 14:53:14.870066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:55.936 14:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:55.937 14:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:39:55.937 14:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:56.196 Nvme0n1 00:39:56.196 14:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:56.455 [ 00:39:56.455 { 00:39:56.455 "aliases": [ 00:39:56.455 "518f4bcd-884d-4219-a3e7-eb4f49f816a8" 00:39:56.455 ], 00:39:56.455 "assigned_rate_limits": { 00:39:56.455 "r_mbytes_per_sec": 0, 00:39:56.455 "rw_ios_per_sec": 0, 00:39:56.455 "rw_mbytes_per_sec": 0, 00:39:56.455 "w_mbytes_per_sec": 0 00:39:56.455 }, 00:39:56.455 "block_size": 4096, 00:39:56.455 "claimed": false, 00:39:56.455 "driver_specific": { 00:39:56.455 "mp_policy": "active_passive", 00:39:56.455 "nvme": [ 00:39:56.455 { 00:39:56.455 "ctrlr_data": { 00:39:56.455 "ana_reporting": false, 00:39:56.455 "cntlid": 1, 00:39:56.455 "firmware_revision": "24.05.1", 00:39:56.455 "model_number": "SPDK bdev Controller", 00:39:56.455 "multi_ctrlr": true, 00:39:56.455 "oacs": { 00:39:56.455 "firmware": 0, 00:39:56.455 "format": 0, 00:39:56.455 "ns_manage": 0, 00:39:56.455 "security": 0 00:39:56.455 }, 00:39:56.455 "serial_number": "SPDK0", 00:39:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.455 "vendor_id": "0x8086" 00:39:56.455 }, 00:39:56.455 "ns_data": { 00:39:56.455 "can_share": true, 00:39:56.455 "id": 1 00:39:56.455 }, 00:39:56.455 "trid": { 00:39:56.455 "adrfam": "IPv4", 00:39:56.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:56.455 "traddr": "10.0.0.2", 00:39:56.455 "trsvcid": "4420", 00:39:56.455 "trtype": "TCP" 00:39:56.455 }, 00:39:56.455 "vs": { 00:39:56.455 "nvme_version": "1.3" 00:39:56.455 } 00:39:56.455 } 00:39:56.455 ] 00:39:56.455 }, 00:39:56.455 "memory_domains": [ 00:39:56.455 { 00:39:56.455 "dma_device_id": "system", 00:39:56.455 "dma_device_type": 1 00:39:56.455 } 00:39:56.455 ], 00:39:56.455 "name": "Nvme0n1", 00:39:56.455 "num_blocks": 38912, 00:39:56.455 "product_name": "NVMe disk", 00:39:56.455 "supported_io_types": { 00:39:56.455 "abort": true, 00:39:56.455 "compare": true, 00:39:56.455 "compare_and_write": true, 00:39:56.455 "flush": true, 00:39:56.455 "nvme_admin": true, 00:39:56.455 "nvme_io": true, 00:39:56.456 "read": true, 00:39:56.456 "reset": true, 00:39:56.456 "unmap": true, 00:39:56.456 "write": true, 00:39:56.456 "write_zeroes": true 00:39:56.456 }, 00:39:56.456 "uuid": "518f4bcd-884d-4219-a3e7-eb4f49f816a8", 00:39:56.456 "zoned": false 00:39:56.456 } 00:39:56.456 ] 00:39:56.456 14:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=89480 00:39:56.456 14:53:15 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:56.456 14:53:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:56.456 Running I/O for 10 seconds... 00:39:57.836 Latency(us) 00:39:57.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:57.836 Nvme0n1 : 1.00 11591.00 45.28 0.00 0.00 0.00 0.00 0.00 00:39:57.836 =================================================================================================================== 00:39:57.836 Total : 11591.00 45.28 0.00 0.00 0.00 0.00 0.00 00:39:57.836 00:39:58.420 14:53:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:39:58.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:58.679 Nvme0n1 : 2.00 11713.00 45.75 0.00 0.00 0.00 0.00 0.00 00:39:58.679 =================================================================================================================== 00:39:58.679 Total : 11713.00 45.75 0.00 0.00 0.00 0.00 0.00 00:39:58.679 00:39:58.679 true 00:39:58.679 14:53:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:58.680 14:53:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:39:58.939 14:53:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:58.939 14:53:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:58.939 14:53:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 89480 00:39:59.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:59.507 Nvme0n1 : 3.00 11488.67 44.88 0.00 0.00 0.00 0.00 0.00 00:39:59.507 =================================================================================================================== 00:39:59.507 Total : 11488.67 44.88 0.00 0.00 0.00 0.00 0.00 00:39:59.507 00:40:00.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:00.444 Nvme0n1 : 4.00 11370.75 44.42 0.00 0.00 0.00 0.00 0.00 00:40:00.444 =================================================================================================================== 00:40:00.444 Total : 11370.75 44.42 0.00 0.00 0.00 0.00 0.00 00:40:00.444 00:40:01.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:01.822 Nvme0n1 : 5.00 11287.40 44.09 0.00 0.00 0.00 0.00 0.00 00:40:01.822 =================================================================================================================== 00:40:01.822 Total : 11287.40 44.09 0.00 0.00 0.00 0.00 0.00 00:40:01.822 00:40:02.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:02.783 Nvme0n1 : 6.00 10690.33 41.76 0.00 0.00 0.00 0.00 0.00 00:40:02.783 =================================================================================================================== 00:40:02.783 Total : 10690.33 41.76 0.00 0.00 0.00 0.00 0.00 00:40:02.783 00:40:03.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:40:03.725 Nvme0n1 : 7.00 10149.14 39.65 0.00 0.00 0.00 0.00 0.00 00:40:03.725 =================================================================================================================== 00:40:03.725 Total : 10149.14 39.65 0.00 0.00 0.00 0.00 0.00 00:40:03.725 00:40:04.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:04.661 Nvme0n1 : 8.00 10198.12 39.84 0.00 0.00 0.00 0.00 0.00 00:40:04.661 =================================================================================================================== 00:40:04.661 Total : 10198.12 39.84 0.00 0.00 0.00 0.00 0.00 00:40:04.661 00:40:05.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:05.598 Nvme0n1 : 9.00 10219.00 39.92 0.00 0.00 0.00 0.00 0.00 00:40:05.598 =================================================================================================================== 00:40:05.598 Total : 10219.00 39.92 0.00 0.00 0.00 0.00 0.00 00:40:05.598 00:40:06.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:06.536 Nvme0n1 : 10.00 10151.20 39.65 0.00 0.00 0.00 0.00 0.00 00:40:06.536 =================================================================================================================== 00:40:06.536 Total : 10151.20 39.65 0.00 0.00 0.00 0.00 0.00 00:40:06.536 00:40:06.794 00:40:06.794 Latency(us) 00:40:06.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:06.794 Nvme0n1 : 10.17 9995.58 39.05 0.00 0.00 12802.03 5208.54 637387.68 00:40:06.794 =================================================================================================================== 00:40:06.794 Total : 9995.58 39.05 0.00 0.00 12802.03 5208.54 637387.68 00:40:06.794 0 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 89428 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 89428 ']' 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 89428 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89428 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:40:06.794 killing process with pid 89428 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89428' 00:40:06.794 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 89428 00:40:06.794 Received shutdown signal, test time was about 10.000000 seconds 00:40:06.794 00:40:06.794 Latency(us) 00:40:06.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:06.794 =================================================================================================================== 00:40:06.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:06.794 14:53:26 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 89428 00:40:07.053 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:07.311 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:07.311 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:07.311 14:53:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 88850 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 88850 00:40:07.571 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 88850 Killed "${NVMF_APP[@]}" "$@" 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=89639 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 89639 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 89639 ']' 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:07.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:40:07.571 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:07.830 [2024-07-22 14:53:27.218109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
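What makes the dirty variant different is visible right here: the first nvmf_tgt (pid 88850) is killed with SIGKILL while the lvol store is still open, so its blobstore metadata is never flushed or closed cleanly, and a second target (pid 89639) is started in the same namespace. When the backing file is registered again on the new target, the blobstore has to replay its metadata, which is what the bs_recover notices that follow are about. Reduced to its essentials, the pattern is roughly this (a sketch; $nvmfpid holds the target's pid, and the polling loop is a simplified stand-in for the harness's waitforlisten helper):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  # unclean shutdown: the lvstore is still open, so nothing gets flushed or closed
  kill -9 "$nvmfpid"

  # start a fresh target in the same namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # re-registering the backing file triggers blobstore recovery before the lvstore reappears
  $rpc bdev_aio_create "$aio" aio_bdev 4096

The free_clusters and total_data_clusters checks that come after the recovery then confirm that the grow performed before the kill survived the replay.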
00:40:07.830 [2024-07-22 14:53:27.218194] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:07.830 [2024-07-22 14:53:27.348391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.830 [2024-07-22 14:53:27.398693] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:07.830 [2024-07-22 14:53:27.398743] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:07.830 [2024-07-22 14:53:27.398749] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:07.830 [2024-07-22 14:53:27.398754] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:07.830 [2024-07-22 14:53:27.398758] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:07.830 [2024-07-22 14:53:27.398781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:08.089 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:08.089 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:40:08.089 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:08.089 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:08.089 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:08.089 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:08.089 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:08.349 [2024-07-22 14:53:27.733056] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:40:08.349 [2024-07-22 14:53:27.733514] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:40:08.349 [2024-07-22 14:53:27.735857] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 518f4bcd-884d-4219-a3e7-eb4f49f816a8 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=518f4bcd-884d-4219-a3e7-eb4f49f816a8 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:40:08.349 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:08.609 14:53:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 518f4bcd-884d-4219-a3e7-eb4f49f816a8 -t 2000 00:40:08.609 [ 00:40:08.609 { 00:40:08.609 "aliases": [ 00:40:08.609 "lvs/lvol" 00:40:08.609 ], 00:40:08.609 "assigned_rate_limits": { 00:40:08.609 "r_mbytes_per_sec": 0, 00:40:08.609 "rw_ios_per_sec": 0, 00:40:08.609 "rw_mbytes_per_sec": 0, 00:40:08.609 "w_mbytes_per_sec": 0 00:40:08.609 }, 00:40:08.609 "block_size": 4096, 00:40:08.609 "claimed": false, 00:40:08.609 "driver_specific": { 00:40:08.609 "lvol": { 00:40:08.609 "base_bdev": "aio_bdev", 00:40:08.609 "clone": false, 00:40:08.609 "esnap_clone": false, 00:40:08.609 "lvol_store_uuid": "fd4bf8c5-2985-4fbe-94e5-a64baad55258", 00:40:08.609 "num_allocated_clusters": 38, 00:40:08.609 "snapshot": false, 00:40:08.609 "thin_provision": false 00:40:08.609 } 00:40:08.609 }, 00:40:08.609 "name": "518f4bcd-884d-4219-a3e7-eb4f49f816a8", 00:40:08.609 "num_blocks": 38912, 00:40:08.609 "product_name": "Logical Volume", 00:40:08.609 "supported_io_types": { 00:40:08.609 "abort": false, 00:40:08.609 "compare": false, 00:40:08.609 "compare_and_write": false, 00:40:08.609 "flush": false, 00:40:08.609 "nvme_admin": false, 00:40:08.609 "nvme_io": false, 00:40:08.609 "read": true, 00:40:08.609 "reset": true, 00:40:08.609 "unmap": true, 00:40:08.609 "write": true, 00:40:08.609 "write_zeroes": true 00:40:08.609 }, 00:40:08.609 "uuid": "518f4bcd-884d-4219-a3e7-eb4f49f816a8", 00:40:08.609 "zoned": false 00:40:08.609 } 00:40:08.609 ] 00:40:08.609 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:40:08.609 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:08.609 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:40:08.868 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:40:08.868 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:08.868 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:40:09.128 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:40:09.128 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:09.388 [2024-07-22 14:53:28.768665] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:40:09.388 14:53:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:09.388 2024/07/22 14:53:29 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:fd4bf8c5-2985-4fbe-94e5-a64baad55258], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:40:09.388 request: 00:40:09.388 { 00:40:09.388 "method": "bdev_lvol_get_lvstores", 00:40:09.388 "params": { 00:40:09.388 "uuid": "fd4bf8c5-2985-4fbe-94e5-a64baad55258" 00:40:09.388 } 00:40:09.388 } 00:40:09.388 Got JSON-RPC error response 00:40:09.388 GoRPCClient: error on JSON-RPC call 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:09.648 aio_bdev 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 518f4bcd-884d-4219-a3e7-eb4f49f816a8 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=518f4bcd-884d-4219-a3e7-eb4f49f816a8 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:40:09.648 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:40:09.907 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 518f4bcd-884d-4219-a3e7-eb4f49f816a8 -t 2000 00:40:10.167 [ 00:40:10.167 { 00:40:10.167 "aliases": [ 00:40:10.167 "lvs/lvol" 00:40:10.167 ], 00:40:10.167 
"assigned_rate_limits": { 00:40:10.167 "r_mbytes_per_sec": 0, 00:40:10.167 "rw_ios_per_sec": 0, 00:40:10.167 "rw_mbytes_per_sec": 0, 00:40:10.167 "w_mbytes_per_sec": 0 00:40:10.167 }, 00:40:10.167 "block_size": 4096, 00:40:10.167 "claimed": false, 00:40:10.167 "driver_specific": { 00:40:10.167 "lvol": { 00:40:10.167 "base_bdev": "aio_bdev", 00:40:10.167 "clone": false, 00:40:10.167 "esnap_clone": false, 00:40:10.167 "lvol_store_uuid": "fd4bf8c5-2985-4fbe-94e5-a64baad55258", 00:40:10.167 "num_allocated_clusters": 38, 00:40:10.167 "snapshot": false, 00:40:10.167 "thin_provision": false 00:40:10.167 } 00:40:10.167 }, 00:40:10.167 "name": "518f4bcd-884d-4219-a3e7-eb4f49f816a8", 00:40:10.167 "num_blocks": 38912, 00:40:10.167 "product_name": "Logical Volume", 00:40:10.167 "supported_io_types": { 00:40:10.167 "abort": false, 00:40:10.167 "compare": false, 00:40:10.167 "compare_and_write": false, 00:40:10.167 "flush": false, 00:40:10.167 "nvme_admin": false, 00:40:10.167 "nvme_io": false, 00:40:10.167 "read": true, 00:40:10.167 "reset": true, 00:40:10.167 "unmap": true, 00:40:10.167 "write": true, 00:40:10.167 "write_zeroes": true 00:40:10.167 }, 00:40:10.167 "uuid": "518f4bcd-884d-4219-a3e7-eb4f49f816a8", 00:40:10.167 "zoned": false 00:40:10.167 } 00:40:10.167 ] 00:40:10.167 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:40:10.167 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:40:10.168 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:10.432 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:40:10.432 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:10.432 14:53:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:40:10.719 14:53:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:40:10.719 14:53:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 518f4bcd-884d-4219-a3e7-eb4f49f816a8 00:40:10.719 14:53:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fd4bf8c5-2985-4fbe-94e5-a64baad55258 00:40:10.979 14:53:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:40:11.239 14:53:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:40:11.807 00:40:11.807 real 0m18.413s 00:40:11.807 user 0m38.631s 00:40:11.807 sys 0m6.714s 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:40:11.807 ************************************ 00:40:11.807 END TEST lvs_grow_dirty 00:40:11.807 ************************************ 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:40:11.807 nvmf_trace.0 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:11.807 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:11.807 rmmod nvme_tcp 00:40:11.807 rmmod nvme_fabrics 00:40:12.067 rmmod nvme_keyring 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 89639 ']' 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 89639 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 89639 ']' 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 89639 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89639 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:12.067 killing process with pid 89639 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89639' 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 89639 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 89639 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:12.067 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.328 14:53:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:40:12.328 00:40:12.328 real 0m37.334s 00:40:12.328 user 0m59.330s 00:40:12.328 sys 0m9.477s 00:40:12.328 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:12.328 ************************************ 00:40:12.328 END TEST nvmf_lvs_grow 00:40:12.328 ************************************ 00:40:12.328 14:53:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:12.328 14:53:31 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:40:12.328 14:53:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:12.328 14:53:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:12.328 14:53:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:12.328 ************************************ 00:40:12.328 START TEST nvmf_bdev_io_wait 00:40:12.328 ************************************ 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:40:12.328 * Looking for test storage... 00:40:12.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:12.328 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:12.588 
14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:40:12.588 14:53:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:40:12.588 Cannot find device "nvmf_tgt_br" 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:40:12.589 Cannot find device "nvmf_tgt_br2" 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:40:12.589 Cannot find device "nvmf_tgt_br" 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:40:12.589 Cannot find device "nvmf_tgt_br2" 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:12.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:12.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:40:12.589 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
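(Editorial sketch, not part of the captured trace.) The nvmf_veth_init steps traced above amount to the following topology setup; this is only a summary of the ip commands already shown in this log, with the test's own namespace and interface names, and the bridge (nvmf_br) enslaving plus the iptables ACCEPT rule that complete the data path follow in the trace below:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side pairs
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

The three pings that follow (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify this topology before the target is started.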
00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:40:12.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:12.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:40:12.849 00:40:12.849 --- 10.0.0.2 ping statistics --- 00:40:12.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:12.849 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:40:12.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:12.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:40:12.849 00:40:12.849 --- 10.0.0.3 ping statistics --- 00:40:12.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:12.849 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:12.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:12.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:40:12.849 00:40:12.849 --- 10.0.0.1 ping statistics --- 00:40:12.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:12.849 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=90026 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 90026 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 90026 ']' 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:12.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:12.849 14:53:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:12.849 [2024-07-22 14:53:32.418080] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:12.849 [2024-07-22 14:53:32.418534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:13.109 [2024-07-22 14:53:32.558288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:13.109 [2024-07-22 14:53:32.608174] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:13.109 [2024-07-22 14:53:32.608238] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:13.109 [2024-07-22 14:53:32.608246] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:13.109 [2024-07-22 14:53:32.608251] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:13.109 [2024-07-22 14:53:32.608256] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:13.109 [2024-07-22 14:53:32.608421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:13.109 [2024-07-22 14:53:32.608728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.109 [2024-07-22 14:53:32.608732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:13.109 [2024-07-22 14:53:32.608511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:13.678 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:13.678 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:40:13.678 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:13.678 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:13.678 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 [2024-07-22 14:53:33.421077] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 Malloc0 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.937 
14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:13.937 [2024-07-22 14:53:33.488114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=90085 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=90087 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:13.937 { 00:40:13.937 "params": { 00:40:13.937 "name": "Nvme$subsystem", 00:40:13.937 "trtype": "$TEST_TRANSPORT", 00:40:13.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:13.937 "adrfam": "ipv4", 00:40:13.937 "trsvcid": "$NVMF_PORT", 00:40:13.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.937 "hdgst": ${hdgst:-false}, 00:40:13.937 "ddgst": ${ddgst:-false} 00:40:13.937 }, 00:40:13.937 "method": "bdev_nvme_attach_controller" 00:40:13.937 } 00:40:13.937 EOF 00:40:13.937 )") 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=90089 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:13.937 { 00:40:13.937 "params": { 00:40:13.937 "name": "Nvme$subsystem", 
00:40:13.937 "trtype": "$TEST_TRANSPORT", 00:40:13.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:13.937 "adrfam": "ipv4", 00:40:13.937 "trsvcid": "$NVMF_PORT", 00:40:13.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.937 "hdgst": ${hdgst:-false}, 00:40:13.937 "ddgst": ${ddgst:-false} 00:40:13.937 }, 00:40:13.937 "method": "bdev_nvme_attach_controller" 00:40:13.937 } 00:40:13.937 EOF 00:40:13.937 )") 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:40:13.937 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:13.938 { 00:40:13.938 "params": { 00:40:13.938 "name": "Nvme$subsystem", 00:40:13.938 "trtype": "$TEST_TRANSPORT", 00:40:13.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:13.938 "adrfam": "ipv4", 00:40:13.938 "trsvcid": "$NVMF_PORT", 00:40:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.938 "hdgst": ${hdgst:-false}, 00:40:13.938 "ddgst": ${ddgst:-false} 00:40:13.938 }, 00:40:13.938 "method": "bdev_nvme_attach_controller" 00:40:13.938 } 00:40:13.938 EOF 00:40:13.938 )") 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:13.938 { 00:40:13.938 "params": { 00:40:13.938 "name": "Nvme$subsystem", 00:40:13.938 "trtype": "$TEST_TRANSPORT", 00:40:13.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:13.938 "adrfam": "ipv4", 00:40:13.938 "trsvcid": "$NVMF_PORT", 00:40:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.938 "hdgst": ${hdgst:-false}, 00:40:13.938 "ddgst": ${ddgst:-false} 00:40:13.938 }, 00:40:13.938 "method": "bdev_nvme_attach_controller" 00:40:13.938 } 00:40:13.938 EOF 00:40:13.938 )") 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=90091 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:13.938 "params": { 00:40:13.938 "name": "Nvme1", 00:40:13.938 "trtype": "tcp", 00:40:13.938 "traddr": "10.0.0.2", 00:40:13.938 "adrfam": "ipv4", 00:40:13.938 "trsvcid": "4420", 00:40:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:13.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:13.938 "hdgst": false, 00:40:13.938 "ddgst": false 00:40:13.938 }, 00:40:13.938 "method": "bdev_nvme_attach_controller" 00:40:13.938 }' 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:13.938 "params": { 00:40:13.938 "name": "Nvme1", 00:40:13.938 "trtype": "tcp", 00:40:13.938 "traddr": "10.0.0.2", 00:40:13.938 "adrfam": "ipv4", 00:40:13.938 "trsvcid": "4420", 00:40:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:13.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:13.938 "hdgst": false, 00:40:13.938 "ddgst": false 00:40:13.938 }, 00:40:13.938 "method": "bdev_nvme_attach_controller" 00:40:13.938 }' 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:13.938 "params": { 00:40:13.938 "name": "Nvme1", 00:40:13.938 "trtype": "tcp", 00:40:13.938 "traddr": "10.0.0.2", 00:40:13.938 "adrfam": "ipv4", 00:40:13.938 "trsvcid": "4420", 00:40:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:13.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:13.938 "hdgst": false, 00:40:13.938 "ddgst": false 00:40:13.938 }, 00:40:13.938 "method": "bdev_nvme_attach_controller" 00:40:13.938 }' 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:40:13.938 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:13.938 "params": { 00:40:13.938 "name": "Nvme1", 00:40:13.938 "trtype": "tcp", 00:40:13.938 "traddr": "10.0.0.2", 00:40:13.938 "adrfam": "ipv4", 00:40:13.938 "trsvcid": "4420", 00:40:13.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:13.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:13.938 "hdgst": false, 00:40:13.938 "ddgst": false 00:40:13.938 }, 00:40:13.938 "method": "bdev_nvme_attach_controller" 00:40:13.938 }' 00:40:13.938 [2024-07-22 14:53:33.551949] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:13.938 [2024-07-22 14:53:33.552427] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:40:13.938 [2024-07-22 14:53:33.552575] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:40:13.938 [2024-07-22 14:53:33.552619] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:40:13.938 [2024-07-22 14:53:33.565621] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:13.938 [2024-07-22 14:53:33.565687] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:40:14.197 [2024-07-22 14:53:33.571517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:14.197 [2024-07-22 14:53:33.571615] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:40:14.197 14:53:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 90085 00:40:14.197 [2024-07-22 14:53:33.748481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.197 [2024-07-22 14:53:33.783261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:40:14.197 [2024-07-22 14:53:33.794515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.454 [2024-07-22 14:53:33.828985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:40:14.454 [2024-07-22 14:53:33.871489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.454 [2024-07-22 14:53:33.916323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:40:14.454 [2024-07-22 14:53:33.942527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:14.454 Running I/O for 1 seconds... 00:40:14.454 Running I/O for 1 seconds... 00:40:14.454 [2024-07-22 14:53:33.978012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:40:14.454 Running I/O for 1 seconds... 00:40:14.714 Running I/O for 1 seconds... 
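(Editorial sketch, not part of the captured trace.) Each of the four bdevperf instances started above is fed its bdev configuration on the fly: the --json /dev/fd/63 argument in the traced command lines is a process substitution carrying the output of gen_nvmf_target_json, which emits a bdev_nvme_attach_controller entry with exactly the params printed in the trace (Nvme1, trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1). A minimal sketch of the write-workload invocation, assuming that same generated config:

    # one bdevperf instance per workload; core mask, instance id and workload differ per job
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256

The latency tables that follow are the per-workload results (read, unmap, write, flush) reported by these four jobs after their one-second runs.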
00:40:15.653 00:40:15.653 Latency(us) 00:40:15.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.653 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:40:15.653 Nvme1n1 : 1.01 9387.37 36.67 0.00 0.00 13569.93 8585.50 35486.74 00:40:15.653 =================================================================================================================== 00:40:15.653 Total : 9387.37 36.67 0.00 0.00 13569.93 8585.50 35486.74 00:40:15.653 00:40:15.653 Latency(us) 00:40:15.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.653 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:40:15.653 Nvme1n1 : 1.01 6591.91 25.75 0.00 0.00 19350.53 9615.76 35486.74 00:40:15.653 =================================================================================================================== 00:40:15.653 Total : 6591.91 25.75 0.00 0.00 19350.53 9615.76 35486.74 00:40:15.653 00:40:15.653 Latency(us) 00:40:15.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.653 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:40:15.653 Nvme1n1 : 1.00 6930.10 27.07 0.00 0.00 18424.14 4378.61 36173.58 00:40:15.653 =================================================================================================================== 00:40:15.653 Total : 6930.10 27.07 0.00 0.00 18424.14 4378.61 36173.58 00:40:15.653 00:40:15.653 Latency(us) 00:40:15.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.653 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:40:15.653 Nvme1n1 : 1.00 224947.94 878.70 0.00 0.00 566.78 216.43 897.90 00:40:15.653 =================================================================================================================== 00:40:15.653 Total : 224947.94 878.70 0.00 0.00 566.78 216.43 897.90 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 90087 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 90089 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 90091 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:15.912 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:16.171 rmmod nvme_tcp 00:40:16.171 rmmod nvme_fabrics 00:40:16.171 rmmod nvme_keyring 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 90026 ']' 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 90026 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 90026 ']' 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 90026 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90026 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:16.171 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90026' 00:40:16.171 killing process with pid 90026 00:40:16.172 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 90026 00:40:16.172 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 90026 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.430 14:53:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:40:16.430 ************************************ 00:40:16.430 END TEST nvmf_bdev_io_wait 00:40:16.430 ************************************ 00:40:16.430 00:40:16.430 real 0m4.174s 00:40:16.430 user 0m18.179s 00:40:16.430 sys 0m1.705s 00:40:16.430 14:53:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:16.430 14:53:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:40:16.689 14:53:36 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:40:16.689 14:53:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:16.689 14:53:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:16.689 14:53:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:16.689 ************************************ 00:40:16.689 START TEST nvmf_queue_depth 00:40:16.689 ************************************ 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:40:16.689 * Looking for test storage... 00:40:16.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:16.689 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:40:16.690 Cannot find device "nvmf_tgt_br" 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:40:16.690 Cannot find device "nvmf_tgt_br2" 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:40:16.690 Cannot find device "nvmf_tgt_br" 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:40:16.690 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:40:16.949 Cannot find device "nvmf_tgt_br2" 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:40:16.949 14:53:36 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:16.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:16.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:40:16.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:16.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:40:16.949 00:40:16.949 --- 10.0.0.2 ping statistics --- 00:40:16.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.949 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:40:16.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:16.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:40:16.949 00:40:16.949 --- 10.0.0.3 ping statistics --- 00:40:16.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.949 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:40:16.949 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:17.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:17.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:40:17.208 00:40:17.208 --- 10.0.0.1 ping statistics --- 00:40:17.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:17.208 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=90322 00:40:17.208 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:17.209 14:53:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 90322 00:40:17.209 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 90322 ']' 00:40:17.209 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:17.209 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:17.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:17.209 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
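The block above is nvmf_veth_init bringing up the test network before the target is started: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends, the initiator side stays in the root namespace, and everything is tied together with a bridge plus an iptables accept rule for TCP port 4420. Condensed into plain commands, using exactly the interface names, addresses, and rules that appear in the trace (nothing below is added beyond what the trace itself shows), the topology is roughly:

  # target-side interfaces live in their own namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator at 10.0.0.1, target portals at 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # verify initiator -> target reachability

With that in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so its listeners sit on 10.0.0.2/10.0.0.3 while the kernel initiator connects from 10.0.0.1.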
00:40:17.209 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:17.209 14:53:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:17.209 [2024-07-22 14:53:36.671195] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:17.209 [2024-07-22 14:53:36.671255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:17.209 [2024-07-22 14:53:36.808903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.467 [2024-07-22 14:53:36.855253] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:17.467 [2024-07-22 14:53:36.855305] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:17.467 [2024-07-22 14:53:36.855312] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:17.467 [2024-07-22 14:53:36.855316] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:17.467 [2024-07-22 14:53:36.855320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:17.467 [2024-07-22 14:53:36.855340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.049 [2024-07-22 14:53:37.588511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.049 Malloc0 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.049 14:53:37 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.049 [2024-07-22 14:53:37.657949] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=90374 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 90374 /var/tmp/bdevperf.sock 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 90374 ']' 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:18.049 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:18.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:18.050 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:18.050 14:53:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:18.318 [2024-07-22 14:53:37.715106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:40:18.318 [2024-07-22 14:53:37.715169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90374 ] 00:40:18.319 [2024-07-22 14:53:37.853233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:18.319 [2024-07-22 14:53:37.899346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.253 14:53:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:19.253 14:53:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:40:19.253 14:53:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:40:19.253 14:53:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:19.253 14:53:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:19.253 NVMe0n1 00:40:19.253 14:53:38 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:19.253 14:53:38 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:19.253 Running I/O for 10 seconds... 00:40:29.245 00:40:29.245 Latency(us) 00:40:29.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.245 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:40:29.245 Verification LBA range: start 0x0 length 0x4000 00:40:29.245 NVMe0n1 : 10.06 11851.10 46.29 0.00 0.00 86086.61 24039.41 77841.89 00:40:29.245 =================================================================================================================== 00:40:29.245 Total : 11851.10 46.29 0.00 0.00 86086.61 24039.41 77841.89 00:40:29.245 0 00:40:29.245 14:53:48 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 90374 00:40:29.245 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 90374 ']' 00:40:29.245 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 90374 00:40:29.245 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:40:29.245 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:29.245 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90374 00:40:29.505 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:29.505 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:29.505 killing process with pid 90374 00:40:29.505 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90374' 00:40:29.505 Received shutdown signal, test time was about 10.000000 seconds 00:40:29.505 00:40:29.505 Latency(us) 00:40:29.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:29.505 =================================================================================================================== 00:40:29.505 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:29.505 14:53:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 90374 00:40:29.505 14:53:48 nvmf_tcp.nvmf_queue_depth 
-- common/autotest_common.sh@970 -- # wait 90374 00:40:29.505 14:53:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:40:29.505 14:53:49 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:40:29.505 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:29.505 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:29.764 rmmod nvme_tcp 00:40:29.764 rmmod nvme_fabrics 00:40:29.764 rmmod nvme_keyring 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 90322 ']' 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 90322 00:40:29.764 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 90322 ']' 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 90322 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90322 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:40:29.765 killing process with pid 90322 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90322' 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 90322 00:40:29.765 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 90322 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:40:30.041 00:40:30.041 real 0m13.427s 00:40:30.041 user 0m23.204s 00:40:30.041 sys 0m1.928s 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:40:30.041 14:53:49 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:30.041 ************************************ 00:40:30.041 END TEST nvmf_queue_depth 00:40:30.041 ************************************ 00:40:30.041 14:53:49 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:40:30.041 14:53:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:30.041 14:53:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:30.041 14:53:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:30.041 ************************************ 00:40:30.041 START TEST nvmf_target_multipath 00:40:30.041 ************************************ 00:40:30.041 14:53:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:40:30.041 * Looking for test storage... 00:40:30.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.300 14:53:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:40:30.301 Cannot find device "nvmf_tgt_br" 
00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:40:30.301 Cannot find device "nvmf_tgt_br2" 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:40:30.301 Cannot find device "nvmf_tgt_br" 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:40:30.301 Cannot find device "nvmf_tgt_br2" 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:30.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:30.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:30.301 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:30.560 
14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:30.560 14:53:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:40:30.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:30.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:40:30.560 00:40:30.560 --- 10.0.0.2 ping statistics --- 00:40:30.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.560 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:40:30.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:30.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:40:30.560 00:40:30.560 --- 10.0.0.3 ping statistics --- 00:40:30.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.560 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:30.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:30.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:40:30.560 00:40:30.560 --- 10.0.0.1 ping statistics --- 00:40:30.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:30.560 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:30.560 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=90705 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 90705 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 90705 ']' 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:30.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:30.561 14:53:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:30.561 [2024-07-22 14:53:50.152324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:40:30.561 [2024-07-22 14:53:50.152399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:30.820 [2024-07-22 14:53:50.290679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:30.820 [2024-07-22 14:53:50.344163] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:30.820 [2024-07-22 14:53:50.344221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:30.820 [2024-07-22 14:53:50.344228] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:30.820 [2024-07-22 14:53:50.344233] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:30.820 [2024-07-22 14:53:50.344237] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:30.820 [2024-07-22 14:53:50.344374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:30.820 [2024-07-22 14:53:50.344626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:30.820 [2024-07-22 14:53:50.344809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.820 [2024-07-22 14:53:50.344812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:31.388 14:53:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:31.388 14:53:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:40:31.388 14:53:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:31.388 14:53:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:31.388 14:53:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:31.647 14:53:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:31.647 14:53:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:31.647 [2024-07-22 14:53:51.228371] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:31.647 14:53:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:40:31.906 Malloc0 00:40:31.906 14:53:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:40:32.166 14:53:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:32.423 14:53:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:32.423 [2024-07-22 14:53:52.043354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:32.681 14:53:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:40:32.681 [2024-07-22 14:53:52.219315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:40:32.681 14:53:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:40:32.940 14:53:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:40:33.199 14:53:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:40:33.199 14:53:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:40:33.199 14:53:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:40:33.199 14:53:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:40:33.199 14:53:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:40:35.102 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 
00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=90837 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:40:35.103 14:53:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:40:35.361 [global] 00:40:35.361 thread=1 00:40:35.361 invalidate=1 00:40:35.361 rw=randrw 00:40:35.361 time_based=1 00:40:35.361 runtime=6 00:40:35.361 ioengine=libaio 00:40:35.361 direct=1 00:40:35.361 bs=4096 00:40:35.361 iodepth=128 00:40:35.361 norandommap=0 00:40:35.361 numjobs=1 00:40:35.361 00:40:35.361 verify_dump=1 00:40:35.361 verify_backlog=512 00:40:35.361 verify_state_save=0 00:40:35.361 do_verify=1 00:40:35.361 verify=crc32c-intel 00:40:35.361 [job0] 00:40:35.361 filename=/dev/nvme0n1 00:40:35.361 Could not set queue depth (nvme0n1) 00:40:35.361 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:35.361 fio-3.35 00:40:35.361 Starting 1 thread 00:40:36.297 14:53:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:36.297 14:53:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # 
local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:36.557 14:53:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:40:37.496 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:37.496 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:37.496 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:37.496 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:37.754 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:38.014 14:53:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:40:38.952 14:53:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:38.952 14:53:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:38.952 14:53:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:38.952 14:53:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 90837 00:40:41.489 00:40:41.489 job0: (groupid=0, jobs=1): err= 0: pid=90858: Mon Jul 22 14:54:01 2024 00:40:41.489 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(323MiB/6006msec) 00:40:41.489 slat (usec): min=3, max=4048, avg=39.55, stdev=160.58 00:40:41.489 clat (usec): min=530, max=14203, avg=6417.68, stdev=1103.82 00:40:41.489 lat (usec): min=571, max=14215, avg=6457.23, stdev=1108.32 00:40:41.489 clat percentiles (usec): 00:40:41.489 | 1.00th=[ 3785], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 5604], 00:40:41.489 | 30.00th=[ 5932], 40.00th=[ 6128], 50.00th=[ 6390], 60.00th=[ 6587], 00:40:41.489 | 70.00th=[ 6783], 80.00th=[ 7111], 90.00th=[ 7635], 95.00th=[ 8455], 00:40:41.489 | 99.00th=[ 9765], 99.50th=[10290], 99.90th=[11338], 99.95th=[11731], 00:40:41.489 | 99.99th=[13435] 00:40:41.489 bw ( KiB/s): min=14840, max=34608, per=50.85%, avg=27998.36, stdev=7068.02, samples=11 00:40:41.489 iops : min= 3710, max= 8652, avg=6999.55, stdev=1766.97, samples=11 00:40:41.489 write: IOPS=7934, BW=31.0MiB/s (32.5MB/s)(165MiB/5324msec); 0 zone resets 00:40:41.490 slat (usec): min=9, max=1630, avg=53.70, stdev=98.06 00:40:41.490 clat (usec): min=380, max=13456, avg=5444.95, stdev=998.35 00:40:41.490 lat (usec): min=499, max=13486, avg=5498.65, stdev=1000.39 00:40:41.490 clat percentiles (usec): 00:40:41.490 | 1.00th=[ 2835], 5.00th=[ 3851], 10.00th=[ 4293], 20.00th=[ 4817], 00:40:41.490 | 30.00th=[ 5080], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5604], 00:40:41.490 | 70.00th=[ 5800], 80.00th=[ 6063], 90.00th=[ 6390], 95.00th=[ 6849], 00:40:41.490 | 99.00th=[ 8717], 99.50th=[ 9503], 99.90th=[10683], 99.95th=[10945], 00:40:41.490 | 99.99th=[13435] 00:40:41.490 bw ( KiB/s): min=15400, max=35416, per=88.22%, avg=28001.36, stdev=6784.05, samples=11 00:40:41.490 iops : min= 3850, max= 8854, avg=7000.27, stdev=1695.96, samples=11 00:40:41.490 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.03% 00:40:41.490 lat (msec) : 2=0.18%, 4=2.84%, 10=96.35%, 20=0.58% 00:40:41.490 cpu : usr=5.98%, sys=33.46%, ctx=9648, majf=0, minf=84 00:40:41.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:41.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:41.490 issued rwts: total=82676,42245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:41.490 00:40:41.490 Run status group 0 (all jobs): 00:40:41.490 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=323MiB (339MB), run=6006-6006msec 00:40:41.490 WRITE: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=165MiB (173MB), run=5324-5324msec 00:40:41.490 00:40:41.490 Disk stats (read/write): 00:40:41.490 nvme0n1: ios=81739/41535, 
merge=0/0, ticks=466358/195786, in_queue=662144, util=98.63% 00:40:41.490 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:40:41.750 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:40:42.009 14:54:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:40:42.947 14:54:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:42.947 14:54:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:42.947 14:54:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:40:42.947 14:54:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:40:42.947 14:54:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=90993 00:40:42.947 14:54:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:40:42.947 14:54:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:40:42.947 [global] 00:40:42.947 thread=1 00:40:42.947 invalidate=1 00:40:42.947 rw=randrw 00:40:42.947 time_based=1 00:40:42.947 runtime=6 00:40:42.947 ioengine=libaio 00:40:42.947 direct=1 00:40:42.947 bs=4096 00:40:42.947 iodepth=128 00:40:42.947 norandommap=0 00:40:42.947 numjobs=1 00:40:42.947 00:40:42.947 verify_dump=1 00:40:42.947 verify_backlog=512 00:40:42.947 verify_state_save=0 00:40:42.947 do_verify=1 00:40:42.947 verify=crc32c-intel 00:40:42.947 [job0] 00:40:42.947 filename=/dev/nvme0n1 00:40:42.947 Could not set queue depth (nvme0n1) 00:40:43.207 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:43.207 fio-3.35 00:40:43.207 Starting 1 thread 00:40:44.151 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:40:44.151 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:40:44.415 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:40:44.415 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:44.416 14:54:03 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:40:45.351 14:54:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:45.352 14:54:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:45.352 14:54:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:45.352 14:54:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:40:45.611 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:45.869 14:54:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:40:46.806 14:54:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:40:46.806 14:54:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:40:46.806 14:54:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:40:46.806 14:54:06 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 90993 00:40:49.341 00:40:49.341 job0: (groupid=0, jobs=1): err= 0: pid=91014: Mon Jul 22 14:54:08 2024 00:40:49.341 read: IOPS=14.6k, BW=57.0MiB/s (59.8MB/s)(342MiB/6003msec) 00:40:49.341 slat (usec): min=2, max=5554, avg=34.12, stdev=144.34 00:40:49.341 clat (usec): min=256, max=13154, avg=6104.11, stdev=1202.84 00:40:49.341 lat (usec): min=277, max=13162, avg=6138.23, stdev=1211.44 00:40:49.341 clat percentiles (usec): 00:40:49.341 | 1.00th=[ 3294], 5.00th=[ 4178], 10.00th=[ 4686], 20.00th=[ 5211], 00:40:49.341 | 30.00th=[ 5538], 40.00th=[ 5866], 50.00th=[ 6128], 60.00th=[ 6325], 00:40:49.341 | 70.00th=[ 6587], 80.00th=[ 6915], 90.00th=[ 7439], 95.00th=[ 8094], 00:40:49.341 | 99.00th=[ 9634], 99.50th=[10290], 99.90th=[11469], 99.95th=[11994], 00:40:49.341 | 99.99th=[12780] 00:40:49.341 bw ( KiB/s): min=15680, max=42192, per=50.91%, avg=29717.09, stdev=9500.27, samples=11 00:40:49.341 iops : min= 3920, max=10548, avg=7429.27, stdev=2375.07, samples=11 00:40:49.341 write: IOPS=8726, BW=34.1MiB/s (35.7MB/s)(174MiB/5103msec); 0 zone resets 00:40:49.341 slat (usec): min=3, max=3002, avg=47.69, stdev=88.26 00:40:49.341 clat (usec): min=324, max=11766, avg=5080.69, stdev=1160.08 00:40:49.341 lat (usec): min=419, max=11780, avg=5128.38, stdev=1167.30 00:40:49.341 clat percentiles (usec): 00:40:49.341 | 1.00th=[ 2474], 5.00th=[ 3195], 10.00th=[ 3556], 20.00th=[ 4113], 00:40:49.341 | 30.00th=[ 4555], 40.00th=[ 4883], 50.00th=[ 5211], 60.00th=[ 5407], 00:40:49.341 | 70.00th=[ 5604], 80.00th=[ 5866], 90.00th=[ 6259], 95.00th=[ 6718], 00:40:49.341 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10290], 99.95th=[10814], 00:40:49.341 | 99.99th=[11600] 00:40:49.341 bw ( KiB/s): min=16584, max=41856, per=85.40%, avg=29810.91, stdev=9182.40, samples=11 00:40:49.341 iops : min= 4146, max=10464, avg=7452.73, stdev=2295.60, samples=11 00:40:49.341 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.05% 00:40:49.341 lat (msec) : 2=0.24%, 4=8.20%, 10=90.98%, 20=0.50% 00:40:49.341 cpu : usr=6.61%, sys=32.96%, ctx=10034, majf=0, minf=133 00:40:49.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:40:49.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:49.341 issued rwts: total=87597,44533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:49.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:49.341 00:40:49.341 Run status group 0 (all jobs): 00:40:49.341 READ: bw=57.0MiB/s (59.8MB/s), 57.0MiB/s-57.0MiB/s (59.8MB/s-59.8MB/s), io=342MiB (359MB), run=6003-6003msec 00:40:49.341 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=174MiB (182MB), run=5103-5103msec 00:40:49.341 00:40:49.341 Disk stats (read/write): 00:40:49.341 nvme0n1: ios=86522/43780, merge=0/0, ticks=472367/192544, in_queue=664911, util=98.46% 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:49.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1215 -- # local i=0 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:40:49.341 14:54:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:40:49.601 rmmod nvme_tcp 00:40:49.601 rmmod nvme_fabrics 00:40:49.601 rmmod nvme_keyring 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 90705 ']' 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 90705 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 90705 ']' 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 90705 00:40:49.601 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90705 00:40:49.860 killing process with pid 90705 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90705' 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 90705 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 90705 00:40:49.860 
14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:49.860 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.120 14:54:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:40:50.120 00:40:50.120 real 0m19.969s 00:40:50.120 user 1m18.292s 00:40:50.120 sys 0m6.901s 00:40:50.120 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:50.120 14:54:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:50.120 ************************************ 00:40:50.120 END TEST nvmf_target_multipath 00:40:50.120 ************************************ 00:40:50.120 14:54:09 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:40:50.120 14:54:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:40:50.120 14:54:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:50.120 14:54:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:50.120 ************************************ 00:40:50.120 START TEST nvmf_zcopy 00:40:50.120 ************************************ 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:40:50.120 * Looking for test storage... 
00:40:50.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:50.120 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:50.121 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:40:50.381 Cannot find device "nvmf_tgt_br" 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:40:50.381 Cannot find device "nvmf_tgt_br2" 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:40:50.381 Cannot find device "nvmf_tgt_br" 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:40:50.381 Cannot find device "nvmf_tgt_br2" 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:40:50.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:40:50.381 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:40:50.381 14:54:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:40:50.381 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:40:50.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:50.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:40:50.641 00:40:50.641 --- 10.0.0.2 ping statistics --- 00:40:50.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.641 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:40:50.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:40:50.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:40:50.641 00:40:50.641 --- 10.0.0.3 ping statistics --- 00:40:50.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.641 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:40:50.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:50.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:40:50.641 00:40:50.641 --- 10.0.0.1 ping statistics --- 00:40:50.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:50.641 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=91292 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 91292 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 91292 ']' 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:50.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:50.641 14:54:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:50.641 [2024-07-22 14:54:10.252284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:50.641 [2024-07-22 14:54:10.252344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:50.901 [2024-07-22 14:54:10.375835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.901 [2024-07-22 14:54:10.425469] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:50.901 [2024-07-22 14:54:10.425522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:50.901 [2024-07-22 14:54:10.425529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:50.901 [2024-07-22 14:54:10.425535] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:50.901 [2024-07-22 14:54:10.425540] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:50.901 [2024-07-22 14:54:10.425558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:51.847 [2024-07-22 14:54:11.197482] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:51.847 [2024-07-22 14:54:11.221527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.847 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:51.848 malloc0 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.848 
14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:40:51.848 { 00:40:51.848 "params": { 00:40:51.848 "name": "Nvme$subsystem", 00:40:51.848 "trtype": "$TEST_TRANSPORT", 00:40:51.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:51.848 "adrfam": "ipv4", 00:40:51.848 "trsvcid": "$NVMF_PORT", 00:40:51.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:51.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:51.848 "hdgst": ${hdgst:-false}, 00:40:51.848 "ddgst": ${ddgst:-false} 00:40:51.848 }, 00:40:51.848 "method": "bdev_nvme_attach_controller" 00:40:51.848 } 00:40:51.848 EOF 00:40:51.848 )") 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:40:51.848 14:54:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:40:51.848 "params": { 00:40:51.848 "name": "Nvme1", 00:40:51.848 "trtype": "tcp", 00:40:51.848 "traddr": "10.0.0.2", 00:40:51.848 "adrfam": "ipv4", 00:40:51.848 "trsvcid": "4420", 00:40:51.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:51.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:51.848 "hdgst": false, 00:40:51.848 "ddgst": false 00:40:51.848 }, 00:40:51.848 "method": "bdev_nvme_attach_controller" 00:40:51.848 }' 00:40:51.848 [2024-07-22 14:54:11.322813] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:40:51.848 [2024-07-22 14:54:11.322869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91343 ] 00:40:51.848 [2024-07-22 14:54:11.462651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:52.120 [2024-07-22 14:54:11.515437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:52.121 Running I/O for 10 seconds... 
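For reference, the bdevperf instance started above receives its target description as an SPDK JSON config over /dev/fd/62. The sketch below reproduces that run by hand using the attach parameters printed by gen_nvmf_target_json in the trace; the surrounding "subsystems"/"bdev"/"config" envelope and the /tmp file name are assumptions for illustration, since the exact wrapper assembled by jq is not shown here.
# Illustrative only: write the target description to a file and point bdevperf
# at it with the same workload flags as the run above.
cat > /tmp/zcopy_nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/zcopy_nvme1.json -t 10 -q 128 -w verify -o 8192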
00:41:02.150 00:41:02.150 Latency(us) 00:41:02.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.150 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:41:02.150 Verification LBA range: start 0x0 length 0x1000 00:41:02.150 Nvme1n1 : 10.01 8039.15 62.81 0.00 0.00 15876.12 2103.45 31136.75 00:41:02.150 =================================================================================================================== 00:41:02.150 Total : 8039.15 62.81 0.00 0.00 15876.12 2103.45 31136.75 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=91460 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:02.409 { 00:41:02.409 "params": { 00:41:02.409 "name": "Nvme$subsystem", 00:41:02.409 "trtype": "$TEST_TRANSPORT", 00:41:02.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:02.409 "adrfam": "ipv4", 00:41:02.409 "trsvcid": "$NVMF_PORT", 00:41:02.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:02.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:02.409 "hdgst": ${hdgst:-false}, 00:41:02.409 "ddgst": ${ddgst:-false} 00:41:02.409 }, 00:41:02.409 "method": "bdev_nvme_attach_controller" 00:41:02.409 } 00:41:02.409 EOF 00:41:02.409 )") 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
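The block of repeated JSON-RPC failures that follows comes from nvmf_subsystem_add_ns being re-issued for NSID 1 while that namespace (malloc0, attached at zcopy.sh@30) is still present and the second bdevperf instance is starting up; the target rejects each attempt with "Requested NSID 1 already in use" and the RPC fails with code -32602. A single such attempt, issued standalone through scripts/rpc.py rather than the test's rpc_cmd wrapper, is sketched below; the arguments mirror zcopy.sh@30 and only the direct rpc.py invocation is an illustration.
# Re-adding the same bdev as NSID 1 while it is still attached is rejected by
# the target: it logs "Requested NSID 1 already in use" and the call returns
# -32602 (Invalid parameters), exactly as in the entries that follow.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode1 malloc0 -n 1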
00:41:02.409 [2024-07-22 14:54:21.856726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.856763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:41:02.409 14:54:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:02.409 "params": { 00:41:02.409 "name": "Nvme1", 00:41:02.409 "trtype": "tcp", 00:41:02.409 "traddr": "10.0.0.2", 00:41:02.409 "adrfam": "ipv4", 00:41:02.409 "trsvcid": "4420", 00:41:02.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:02.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:02.409 "hdgst": false, 00:41:02.409 "ddgst": false 00:41:02.409 }, 00:41:02.409 "method": "bdev_nvme_attach_controller" 00:41:02.409 }' 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.868698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.868728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.880669] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.880701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.892659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.892691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 [2024-07-22 14:54:21.895511] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:41:02.409 [2024-07-22 14:54:21.895563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91460 ] 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.904628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.904654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.916619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.916647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.928568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.928607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.940551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.940576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.952514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.952534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.409 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.409 [2024-07-22 14:54:21.964491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.409 [2024-07-22 14:54:21.964510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.410 
2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.410 [2024-07-22 14:54:21.976468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.410 [2024-07-22 14:54:21.976486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.410 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.410 [2024-07-22 14:54:21.988445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.410 [2024-07-22 14:54:21.988464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.410 2024/07/22 14:54:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.410 [2024-07-22 14:54:22.000425] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.410 [2024-07-22 14:54:22.000443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.410 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.410 [2024-07-22 14:54:22.012407] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.410 [2024-07-22 14:54:22.012425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.410 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.410 [2024-07-22 14:54:22.024393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.410 [2024-07-22 14:54:22.024415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.410 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.410 [2024-07-22 14:54:22.033624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.410 [2024-07-22 14:54:22.036384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.410 [2024-07-22 14:54:22.036409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.048356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.048381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.060334] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.060358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.072336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.072362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.084311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.084335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 [2024-07-22 14:54:22.085616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.096278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.096301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.108286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.108314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.120244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.120271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.132244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.132271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.144207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.144233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.156179] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.156205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.669 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.669 [2024-07-22 14:54:22.168162] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.669 [2024-07-22 14:54:22.168187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.180133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.180154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.192114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.192135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.204092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.204112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.216089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.216110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.228119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.228142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 Running I/O for 5 seconds... 
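
Each triplet repeated above is one failed namespace-add round-trip, issued roughly every 12 ms while bdevperf starts its 5-second I/O run: subsystem.c rejects the request because NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, nvmf_rpc.c then reports that the namespace cannot be added, and the client-side line (its %!s(bool=false) formatting suggests a Go client) records the resulting JSON-RPC error, Code=-32602 Msg=Invalid parameters. A minimal Python sketch of that exchange, reconstructed from the params echoed in the log, follows; the Unix-socket path and the single recv() are illustrative assumptions (SPDK's default RPC listener is /var/tmp/spdk.sock, this run may use another path), and this is not the client the test actually drives.

import json
import socket

# Request body reconstructed from the params echoed in the log entries above.
# The socket path is an assumption: /var/tmp/spdk.sock is SPDK's default RPC
# listener, and the target in this run may have been started with another path.
REQUEST = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
    },
}

def add_ns_once(sock_path="/var/tmp/spdk.sock"):
    # Send one request and read one response.  A single recv() is a
    # simplification; a robust client keeps reading until the JSON document
    # is complete.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(REQUEST).encode())
        return json.loads(sock.recv(65536).decode())

# While NSID 1 is still in use, the target answers with the error seen in
# every cycle of this log:
#   {"jsonrpc": "2.0", "id": 1,
#    "error": {"code": -32602, "message": "Invalid parameters"}}

The same request can typically be issued with SPDK's bundled client as scripts/rpc.py nvmf_subsystem_add_ns <nqn> <bdev_name>; the option spelling for pinning a specific NSID is not shown in the log, so it is omitted here.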
00:41:02.670 [2024-07-22 14:54:22.240108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.240127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.255165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.255195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.270832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.270860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.670 [2024-07-22 14:54:22.285208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.670 [2024-07-22 14:54:22.285235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.670 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.929 [2024-07-22 14:54:22.299760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.929 [2024-07-22 14:54:22.299787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.929 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.929 [2024-07-22 14:54:22.310283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.929 [2024-07-22 14:54:22.310309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.929 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.929 [2024-07-22 14:54:22.324516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.929 [2024-07-22 14:54:22.324542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.338146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.338171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.351968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.351993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.365552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.365579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.379317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.379342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.393028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.393054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.406932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.406956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.420747] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.420772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.434928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.434957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.449245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.449272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.460012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.460038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.475291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.475317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.488833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.488859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.504071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.504099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.519778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.519806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.534113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.534143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:02.930 [2024-07-22 14:54:22.547888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:02.930 [2024-07-22 14:54:22.547918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:02.930 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.190 [2024-07-22 14:54:22.562455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.190 [2024-07-22 14:54:22.562481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.573239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.573265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.588290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.588316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.599192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.599219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.613477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.613502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.627643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.627679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.638804] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.638833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.654148] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.654176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.670019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.670047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.685174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.685204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.700648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.700685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.714604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.714630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.729805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.729830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.745468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.745496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.760054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.760080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.770710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.770735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.787056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
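
The add attempts keep arriving every 10-15 ms and keep being rejected, a cadence consistent with a namespace hot-plug loop that re-adds NSID 1 while a previous attach is still in place. The loop sketched below illustrates that pattern under the same assumptions as the earlier snippet (raw JSON-RPC over SPDK's default Unix socket); it is not the test's actual driver, and the nvmf_subsystem_remove_ns parameters (nqn plus nsid) are assumed to mirror the add call.

import json
import socket
from itertools import count

_IDS = count(1)

def rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    # Same framing as the previous sketch; sock_path is again an assumption.
    request = {"jsonrpc": "2.0", "id": next(_IDS), "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        return json.loads(sock.recv(65536).decode())

NQN = "nqn.2016-06.io.spdk:cnode1"

def hotplug_cycle():
    # One add/remove round: the add only succeeds while NSID 1 is detached.
    # If it races ahead of the matching remove, the target returns the
    # -32602 error that fills this log and the loop simply tries again.
    reply = rpc("nvmf_subsystem_add_ns",
                {"nqn": NQN, "namespace": {"bdev_name": "malloc0", "nsid": 1}})
    if "error" not in reply:
        rpc("nvmf_subsystem_remove_ns", {"nqn": NQN, "nsid": 1})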
00:41:03.191 [2024-07-22 14:54:22.787084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.802238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.802265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.191 [2024-07-22 14:54:22.817054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.191 [2024-07-22 14:54:22.817079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.191 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.451 [2024-07-22 14:54:22.828284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.451 [2024-07-22 14:54:22.828309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.451 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.451 [2024-07-22 14:54:22.842523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.451 [2024-07-22 14:54:22.842548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.451 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.451 [2024-07-22 14:54:22.856243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.451 [2024-07-22 14:54:22.856268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.451 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.451 [2024-07-22 14:54:22.870430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.451 [2024-07-22 14:54:22.870456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.451 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.451 [2024-07-22 14:54:22.884332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.451 [2024-07-22 14:54:22.884357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.451 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.451 [2024-07-22 14:54:22.897998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.451 [2024-07-22 14:54:22.898022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.451 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.451 [2024-07-22 14:54:22.912147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.451 [2024-07-22 14:54:22.912171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:22.926147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:22.926171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:22.939711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:22.939736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:22.953185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:22.953212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:22.966480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:22.966505] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:22.980723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:22.980748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:22.991724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:22.991747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:23.006720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:23.006749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:23.017646] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:23.017683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:23.032828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:23.032856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:23.048552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:23.048606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:23.063238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:23.063266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.452 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.452 [2024-07-22 14:54:23.078542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.452 [2024-07-22 14:54:23.078575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.726 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.726 [2024-07-22 14:54:23.094161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.726 [2024-07-22 14:54:23.094193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.726 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.726 [2024-07-22 14:54:23.108884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.726 [2024-07-22 14:54:23.108914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.119247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.119276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.133807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.133847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.148437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.148464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.163973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.164000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.178590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.178618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.189268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.189294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.204130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.204172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.218263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.218292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.233229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.233257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:41:03.727 [2024-07-22 14:54:23.244989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.245018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.260158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.260192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.275859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.275890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.289165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.289196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.304037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.304062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.317984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.318008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.331526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.331551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.727 [2024-07-22 14:54:23.345511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.727 [2024-07-22 14:54:23.345537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.727 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.986 [2024-07-22 14:54:23.359626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.986 [2024-07-22 14:54:23.359652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.986 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.986 [2024-07-22 14:54:23.373059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.986 [2024-07-22 14:54:23.373085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.986 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.986 [2024-07-22 14:54:23.387070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.986 [2024-07-22 14:54:23.387095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.986 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.986 [2024-07-22 14:54:23.400627] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.986 [2024-07-22 14:54:23.400652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.986 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.986 [2024-07-22 14:54:23.414723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.986 [2024-07-22 14:54:23.414748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.428217] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.428242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.441796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.441820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.455892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.455917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.470487] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.470516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.482133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.482161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.497680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.497707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.508612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.508638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.523911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.523936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.539344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.539370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.554015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.554056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.565612] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.565639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.580048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.580080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.593738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.593770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:03.987 [2024-07-22 14:54:23.608645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:41:03.987 [2024-07-22 14:54:23.608697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:03.987 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.619613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.619645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.634515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.634548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.645680] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.645707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.660974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.661001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.676507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.676535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.691850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.691877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.707339] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.707365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.721100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.721128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.736065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.247 [2024-07-22 14:54:23.736092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.247 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.247 [2024-07-22 14:54:23.750083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.750110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.764263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.764288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.778368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.778394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.792206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
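The loop driving the entries above keeps re-issuing the same nvmf_subsystem_add_ns call against nqn.2016-06.io.spdk:cnode1, and every attempt is rejected with Code=-32602 (Invalid parameters) because NSID 1 was already claimed by an earlier add. As a rough illustration only — not the harness's actual client — the Go sketch below reconstructs the request shape from the params echoed in the log; the Unix socket path /var/tmp/spdk.sock is an assumed SPDK default, and the field names mirror what the log prints (nqn, namespace.bdev_name, namespace.nsid, namespace.no_auto_visible).

// Minimal sketch (assumptions noted above): send one nvmf_subsystem_add_ns
// JSON-RPC request and print the error the target returns when the NSID is taken.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

type addNsParams struct {
	Nqn       string `json:"nqn"`
	Namespace struct {
		BdevName      string `json:"bdev_name"`
		Nsid          int    `json:"nsid"`
		NoAutoVisible bool   `json:"no_auto_visible"`
	} `json:"namespace"`
}

type rpcRequest struct {
	Version string      `json:"jsonrpc"`
	ID      int         `json:"id"`
	Method  string      `json:"method"`
	Params  interface{} `json:"params"`
}

type rpcResponse struct {
	Error *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
	Result json.RawMessage `json:"result"`
}

func main() {
	// Assumed default SPDK RPC socket; the CI harness may use a different path.
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	var p addNsParams
	p.Nqn = "nqn.2016-06.io.spdk:cnode1"
	p.Namespace.BdevName = "malloc0"
	p.Namespace.Nsid = 1 // NSID 1 is already in use, so the target rejects this

	req := rpcRequest{Version: "2.0", ID: 1, Method: "nvmf_subsystem_add_ns", Params: p}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		log.Fatal(err)
	}

	var resp rpcResponse
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		log.Fatal(err)
	}
	if resp.Error != nil {
		// Matches the log output: Code=-32602 Msg=Invalid parameters
		fmt.Printf("error received for nvmf_subsystem_add_ns: Code=%d Msg=%s\n",
			resp.Error.Code, resp.Error.Message)
	}
}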
00:41:04.248 [2024-07-22 14:54:23.792231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.806624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.806649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.817510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.817535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.832331] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.832357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.847841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.847866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.248 [2024-07-22 14:54:23.862453] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.248 [2024-07-22 14:54:23.862480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.248 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.877034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.877062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.887716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.887742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.902072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.902099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.915811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.915838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.930119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.930149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.940941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.940970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.955644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.955682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.969384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.969412] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.983636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.983665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:23.994284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:23.994312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:24.009068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:24.009094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:24.020525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:24.020555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:24.035867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:24.035898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.508 [2024-07-22 14:54:24.051239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.508 [2024-07-22 14:54:24.051279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.508 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.509 [2024-07-22 14:54:24.066614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.509 [2024-07-22 14:54:24.066642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.509 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.509 [2024-07-22 14:54:24.081643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.509 [2024-07-22 14:54:24.081683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.509 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.509 [2024-07-22 14:54:24.095742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.509 [2024-07-22 14:54:24.095766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.509 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.509 [2024-07-22 14:54:24.110544] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.509 [2024-07-22 14:54:24.110571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.509 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.509 [2024-07-22 14:54:24.124832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.509 [2024-07-22 14:54:24.124857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.509 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.138846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.138872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.152847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.152873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.166711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.166737] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.180719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.180745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.194715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.194750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.209090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.209117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.223129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.223155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.237103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.237129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:41:04.768 [2024-07-22 14:54:24.251034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.251058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.265084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.265112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.279129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.279162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.293641] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.293684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.308202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.308233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.323147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.323176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.334294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.334323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.350933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.350963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.366528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.366558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.382606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.382636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:04.768 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:04.768 [2024-07-22 14:54:24.394325] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:04.768 [2024-07-22 14:54:24.394354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.410122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.410152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.425161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.425190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.439635] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.439664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.454410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.454438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.465135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.465163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.480307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.480336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.491728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.491754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.506678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.506720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.522431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.522463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.537037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.537070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.548410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.548442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.563991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.564023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.579988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.580019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.595270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.595299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.609271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.609301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.623749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.623779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.639025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.029 [2024-07-22 14:54:24.639054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.029 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.029 [2024-07-22 14:54:24.654435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.030 [2024-07-22 14:54:24.654465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.030 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.289 [2024-07-22 14:54:24.669299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.289 [2024-07-22 14:54:24.669328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.289 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.679989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.680016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.694886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.694915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.710509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.710541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.720204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.720234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.727497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.727526] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.738730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.738765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.748497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.748530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.756025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.756056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.766911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.766940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.776368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:05.290 [2024-07-22 14:54:24.776398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.785848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.785876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.793161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.793190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.808121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.808146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.822898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.822927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.837019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.837047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.851096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.851128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.865751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.865780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.879740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.879768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.893848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.893877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.290 [2024-07-22 14:54:24.907604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.290 [2024-07-22 14:54:24.907634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.290 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.550 [2024-07-22 14:54:24.921906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.550 [2024-07-22 14:54:24.921935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.550 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.550 [2024-07-22 14:54:24.935782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.550 [2024-07-22 14:54:24.935809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.550 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.550 [2024-07-22 14:54:24.949886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:24.949913] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:24.963791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:24.963819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:24.977855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:24.977884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:24.992119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:24.992146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.003296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.003323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.017700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.017725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.031624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.031651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.045998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.046024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.059799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.059824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.073888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.073914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.088232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.088258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.102204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.102232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.116734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.116759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.127534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.127561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.142402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.142430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.156449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.156477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.551 [2024-07-22 14:54:25.170545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.551 [2024-07-22 14:54:25.170571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.551 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.184616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.184641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.198492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.198519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.213366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.213393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:41:05.812 [2024-07-22 14:54:25.228965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.228991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.243263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.243289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.257290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.257319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.271635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.271663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.282911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.282937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.297368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.297395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.311192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.311217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.325369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.325398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.339860] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.339887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.354164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.354191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.365414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.365441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.380126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.380153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.393923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.393948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.402790] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.402816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.411595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.411622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.421190] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.421229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:05.812 [2024-07-22 14:54:25.430130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:05.812 [2024-07-22 14:54:25.430159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:05.812 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.071 [2024-07-22 14:54:25.445204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.071 [2024-07-22 14:54:25.445237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.071 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.071 [2024-07-22 14:54:25.456080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.071 [2024-07-22 14:54:25.456110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.071 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.071 [2024-07-22 14:54:25.471771] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.071 [2024-07-22 14:54:25.471807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.071 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.071 [2024-07-22 14:54:25.487062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.071 [2024-07-22 14:54:25.487095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.071 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.071 [2024-07-22 14:54:25.501722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.071 [2024-07-22 14:54:25.501751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.071 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.071 [2024-07-22 14:54:25.512836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.071 [2024-07-22 14:54:25.512866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.071 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.071 [2024-07-22 14:54:25.527915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.527946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.538963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.538995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.553921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.553956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.564731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.564761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.579069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.579099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.593660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.593700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.604625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.604654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.619155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.619188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.632786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.632818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.647048] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.647083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.661035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.661072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.675596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.675634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.072 [2024-07-22 14:54:25.686061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.072 [2024-07-22 14:54:25.686092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.072 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.701423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.701456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.716460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.716493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.731176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.731206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.742643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:41:06.332 [2024-07-22 14:54:25.742683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.757948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.757986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.773375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.773417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.788364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.788400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.803881] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.803926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.818392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.818425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.832824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.832857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.846562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.846593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.861630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.861678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.878152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.878189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.332 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.332 [2024-07-22 14:54:25.893955] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.332 [2024-07-22 14:54:25.893993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.333 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.333 [2024-07-22 14:54:25.908952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.333 [2024-07-22 14:54:25.908987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.333 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.333 [2024-07-22 14:54:25.920023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.333 [2024-07-22 14:54:25.920051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.333 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.333 [2024-07-22 14:54:25.934841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.333 [2024-07-22 14:54:25.934869] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.333 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.333 [2024-07-22 14:54:25.948660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.333 [2024-07-22 14:54:25.948699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.333 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:25.963035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:25.963064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:25.974181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:25.974210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:25.988490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:25.988521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.002638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.002685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.016558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.016596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.031106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.031134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.046552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.046583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.061105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.061138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.075839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.075875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.091903] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.091937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.106430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.106461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.120788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.120819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.131450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.131481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.146760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.593 [2024-07-22 14:54:26.146791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.593 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.593 [2024-07-22 14:54:26.161891] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.594 [2024-07-22 14:54:26.161921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.594 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.594 [2024-07-22 14:54:26.176605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.594 [2024-07-22 14:54:26.176635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.594 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.594 [2024-07-22 14:54:26.190863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.594 [2024-07-22 14:54:26.190892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.594 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.594 [2024-07-22 14:54:26.205040] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.594 [2024-07-22 14:54:26.205072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.594 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:41:06.594 [2024-07-22 14:54:26.219545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.594 [2024-07-22 14:54:26.219582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.594 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.231011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.231041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.245406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.245437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.259254] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.259283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.273384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.273410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.287305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.287331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.301250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.301279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.315751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.315776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.326238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.326266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.340698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.340724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.355519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.355549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.370396] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.370423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.384444] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.384473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.398946] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.398973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.410100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.410127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.854 [2024-07-22 14:54:26.425091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.854 [2024-07-22 14:54:26.425116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.854 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.855 [2024-07-22 14:54:26.440319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.855 [2024-07-22 14:54:26.440344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.855 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.855 [2024-07-22 14:54:26.454743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.855 [2024-07-22 14:54:26.454768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.855 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.855 [2024-07-22 14:54:26.468090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.855 [2024-07-22 14:54:26.468115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:06.855 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:06.855 [2024-07-22 14:54:26.482424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:41:06.855 [2024-07-22 14:54:26.482450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:07.140 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:41:07.141 [2024-07-22 14:54:26.495868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:41:07.141 [2024-07-22 14:54:26.495893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:41:07.141 2024/07/22 14:54:26 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[The same three-entry error group (subsystem.c:2029 "Requested NSID 1 already in use", nvmf_rpc.c:1546 "Unable to add namespace", and the JSON-RPC Code=-32602 Msg=Invalid parameters response) repeats for every retry from 14:54:26.509 through 14:54:27.233; the duplicate entries are omitted here.]
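The failures above come from zcopy.sh repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1, which the subsystem already exposes, while the subsystem is paused and resumed. For context, a minimal manual reproduction of the same -32602 rejection, assuming an SPDK target built from this tree is already running with the default /var/tmp/spdk.sock RPC socket and the repo root as the working directory, would be roughly:

  # create a malloc bdev and a subsystem, then attach the bdev as NSID 1
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # a second add with the same NSID is rejected with "Requested NSID 1 already in use"
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1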
00:41:07.671
00:41:07.671 Latency(us)
00:41:07.671 Device Information                                     : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:41:07.671 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:41:07.671 	 Nvme1n1                                             :       5.01   15991.74     124.94      0.00      0.00    7995.41    3634.53   19002.58
00:41:07.671 ===================================================================================================================
00:41:07.671 	 Total                                               :              15991.74     124.94      0.00      0.00    7995.41    3634.53   19002.58
[The nvmf_subsystem_add_ns error group keeps repeating from 14:54:27.242 through 14:54:27.422 while the I/O job is shut down; those duplicate entries are omitted here.]
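As a rough cross-check on the summary table (derived here, not part of the captured output), the reported bandwidth and latency are self-consistent for the 8192-byte I/O size used by the job:

  15991.74 IOPS x 8192 B            ~= 131.0 MB/s ~= 124.94 MiB/s   (matches the MiB/s column)
  128 (queue depth) / 15991.74 IOPS ~= 0.0080 s   ~= 8004 us        (close to the reported 7995.41 us average, per Little's law; approximate, since the queue is not pinned at 128 for the whole run)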
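Once the timed I/O job above is reaped, zcopy.sh removes the namespace and re-adds it backed by a delay bdev layered on malloc0, so that I/O issued against delay0 stays outstanding long enough for the abort example in the next step to have something to cancel. The equivalent manual sequence, under the same assumptions as the earlier sketch (running target, default RPC socket, repo root as working directory), would be roughly:

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive abortable random I/O at the target for 5 seconds, using the same flags as the trace below
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'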
00:41:07.932 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (91460) - No such process
00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 91460
00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:41:07.932 14:54:27
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.932 delay0 00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:07.932 14:54:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:41:08.192 [2024-07-22 14:54:27.636758] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:41:14.769 Initializing NVMe Controllers 00:41:14.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:14.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:41:14.769 Initialization complete. Launching workers. 00:41:14.769 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:41:14.769 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 33 00:41:14.769 success 177, unsuccess 189, failed 0 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:14.769 rmmod nvme_tcp 00:41:14.769 rmmod nvme_fabrics 00:41:14.769 rmmod nvme_keyring 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 91292 ']' 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 91292 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 91292 ']' 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 91292 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91292 00:41:14.769 killing process with pid 91292 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 91292' 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 91292 00:41:14.769 14:54:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 91292 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:41:14.769 00:41:14.769 real 0m24.497s 00:41:14.769 user 0m40.702s 00:41:14.769 sys 0m5.731s 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:14.769 14:54:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:41:14.769 ************************************ 00:41:14.769 END TEST nvmf_zcopy 00:41:14.769 ************************************ 00:41:14.769 14:54:34 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:41:14.769 14:54:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:41:14.769 14:54:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:41:14.769 14:54:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:14.769 ************************************ 00:41:14.769 START TEST nvmf_nmic 00:41:14.769 ************************************ 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:41:14.769 * Looking for test storage... 
00:41:14.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:14.769 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
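nvmf_veth_init, traced in the entries that follow, builds the virtual test network: a veth pair for the initiator, two veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk network namespace, and the nvmf_br bridge tying the host-side ends together, with 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 on the target interfaces. A stripped-down sketch of the same topology (assuming iproute2 and iptables are available; it omits the second target interface and any cleanup of a pre-existing setup) looks like:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to the default port on the initiator interface
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT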
00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:14.770 Cannot find device "nvmf_tgt_br" 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:14.770 Cannot find device "nvmf_tgt_br2" 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:14.770 Cannot find device "nvmf_tgt_br" 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:14.770 Cannot find device "nvmf_tgt_br2" 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:41:14.770 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:15.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:15.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:15.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:15.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:41:15.029 00:41:15.029 --- 10.0.0.2 ping statistics --- 00:41:15.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:15.029 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:15.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:15.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.139 ms 00:41:15.029 00:41:15.029 --- 10.0.0.3 ping statistics --- 00:41:15.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:15.029 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:15.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:15.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:41:15.029 00:41:15.029 --- 10.0.0.1 ping statistics --- 00:41:15.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:15.029 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:15.029 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=91779 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 91779 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 91779 ']' 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:15.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:15.288 14:54:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:15.288 [2024-07-22 14:54:34.736792] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:41:15.288 [2024-07-22 14:54:34.736867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:15.288 [2024-07-22 14:54:34.864117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:15.546 [2024-07-22 14:54:34.926318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:15.546 [2024-07-22 14:54:34.926366] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:15.546 [2024-07-22 14:54:34.926374] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:15.546 [2024-07-22 14:54:34.926379] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:15.546 [2024-07-22 14:54:34.926384] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:15.546 [2024-07-22 14:54:34.926515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:15.546 [2024-07-22 14:54:34.926803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:15.546 [2024-07-22 14:54:34.926860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:15.546 [2024-07-22 14:54:34.926922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.114 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.114 [2024-07-22 14:54:35.743508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:16.374 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 Malloc0 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 [2024-07-22 14:54:35.816819] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:16.375 test case1: single bdev can't be used in multiple subsystems 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 [2024-07-22 14:54:35.852677] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:41:16.375 [2024-07-22 14:54:35.852813] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:41:16.375 [2024-07-22 14:54:35.852884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:41:16.375 2024/07/22 14:54:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:41:16.375 request: 00:41:16.375 { 00:41:16.375 "method": "nvmf_subsystem_add_ns", 00:41:16.375 "params": { 00:41:16.375 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:41:16.375 "namespace": { 00:41:16.375 "bdev_name": "Malloc0", 00:41:16.375 "no_auto_visible": false 00:41:16.375 } 00:41:16.375 } 00:41:16.375 } 00:41:16.375 Got JSON-RPC error response 00:41:16.375 GoRPCClient: error on JSON-RPC call 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:41:16.375 Adding namespace failed - expected result. 
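The rejection above is the expected outcome: Malloc0 is already claimed with an exclusive_write descriptor by nqn.2016-06.io.spdk:cnode1, so attaching it to cnode2 fails with JSON-RPC error -32602. What follows is a minimal standalone sketch of the same check driven through scripts/rpc.py (the rpc_cmd wrapper in the trace ultimately calls that script); it assumes a running nvmf_tgt listening on /var/tmp/spdk.sock with the tcp transport created and a Malloc0 bdev already present, exactly as in the steps above.

#!/usr/bin/env bash
# Sketch of "test case1": a bdev already claimed by one subsystem
# cannot be added as a namespace of a second subsystem.
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# First subsystem takes the exclusive_write claim on Malloc0.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Second subsystem: adding the same bdev must be rejected (Code=-32602).
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: Malloc0 was added to two subsystems" >&2
    exit 1
fi
echo " Adding namespace failed - expected result."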
00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:41:16.375 test case2: host connect to nvmf target in multiple paths 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:16.375 [2024-07-22 14:54:35.868759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:16.375 14:54:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:16.635 14:54:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:41:16.635 14:54:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:41:16.635 14:54:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:41:16.635 14:54:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:41:16.635 14:54:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:41:16.635 14:54:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:41:19.188 14:54:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:41:19.188 14:54:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:41:19.188 14:54:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:41:19.188 14:54:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:41:19.188 14:54:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:41:19.188 14:54:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:41:19.188 14:54:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:19.188 [global] 00:41:19.188 thread=1 00:41:19.188 invalidate=1 00:41:19.188 rw=write 00:41:19.188 time_based=1 00:41:19.188 runtime=1 00:41:19.188 ioengine=libaio 00:41:19.188 direct=1 00:41:19.188 bs=4096 00:41:19.188 iodepth=1 00:41:19.188 norandommap=0 00:41:19.188 numjobs=1 00:41:19.188 00:41:19.188 verify_dump=1 00:41:19.188 verify_backlog=512 00:41:19.188 verify_state_save=0 00:41:19.188 do_verify=1 00:41:19.188 verify=crc32c-intel 00:41:19.188 [job0] 00:41:19.188 filename=/dev/nvme0n1 00:41:19.188 Could not set queue depth (nvme0n1) 00:41:19.188 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:19.188 fio-3.35 00:41:19.188 Starting 1 thread 00:41:20.127 00:41:20.127 job0: (groupid=0, jobs=1): err= 0: pid=91890: Mon Jul 22 14:54:39 2024 00:41:20.127 read: IOPS=4854, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec) 00:41:20.127 slat (nsec): min=6403, max=37062, avg=8205.56, stdev=2155.72 00:41:20.127 clat (usec): min=81, 
max=176, avg=101.98, stdev=10.05 00:41:20.127 lat (usec): min=87, max=185, avg=110.19, stdev=10.71 00:41:20.127 clat percentiles (usec): 00:41:20.127 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:41:20.127 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 103], 00:41:20.127 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 121], 00:41:20.127 | 99.00th=[ 133], 99.50th=[ 139], 99.90th=[ 161], 99.95th=[ 169], 00:41:20.127 | 99.99th=[ 176] 00:41:20.127 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:41:20.127 slat (usec): min=9, max=127, avg=13.42, stdev= 6.19 00:41:20.127 clat (usec): min=54, max=1809, avg=75.44, stdev=26.62 00:41:20.127 lat (usec): min=68, max=1828, avg=88.86, stdev=28.21 00:41:20.127 clat percentiles (usec): 00:41:20.127 | 1.00th=[ 63], 5.00th=[ 65], 10.00th=[ 67], 20.00th=[ 69], 00:41:20.127 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 76], 00:41:20.127 | 70.00th=[ 78], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 92], 00:41:20.127 | 99.00th=[ 105], 99.50th=[ 111], 99.90th=[ 149], 99.95th=[ 297], 00:41:20.127 | 99.99th=[ 1811] 00:41:20.127 bw ( KiB/s): min=20480, max=20480, per=100.00%, avg=20480.00, stdev= 0.00, samples=1 00:41:20.127 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:41:20.127 lat (usec) : 100=74.08%, 250=25.88%, 500=0.03% 00:41:20.127 lat (msec) : 2=0.01% 00:41:20.127 cpu : usr=2.40%, sys=7.30%, ctx=9985, majf=0, minf=2 00:41:20.127 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:20.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:20.127 issued rwts: total=4859,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:20.127 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:20.127 00:41:20.127 Run status group 0 (all jobs): 00:41:20.127 READ: bw=19.0MiB/s (19.9MB/s), 19.0MiB/s-19.0MiB/s (19.9MB/s-19.9MB/s), io=19.0MiB (19.9MB), run=1001-1001msec 00:41:20.127 WRITE: bw=20.0MiB/s (20.9MB/s), 20.0MiB/s-20.0MiB/s (20.9MB/s-20.9MB/s), io=20.0MiB (21.0MB), run=1001-1001msec 00:41:20.127 00:41:20.127 Disk stats (read/write): 00:41:20.127 nvme0n1: ios=4410/4608, merge=0/0, ticks=478/380, in_queue=858, util=91.08% 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:20.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
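The fio run and disconnect above complete "test case2": the host connects to the same subsystem through two listeners (ports 4420 and 4421), which is why a single nvme disconnect by NQN reports 2 controller(s) removed. Below is a condensed host-side sketch of that flow, assuming the target-side configuration from the trace, nvme-cli with the nvme-tcp module loaded, and the host NQN/ID values generated for this particular run.

#!/usr/bin/env bash
# Sketch of "test case2": one subsystem reachable over two TCP listeners.
set -euo pipefail
hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5
hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5
subnqn=nqn.2016-06.io.spdk:cnode1

# Two paths to the same subsystem: ports 4420 and 4421.
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4421

# Wait until the namespace shows up by its serial number.
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

# 4k write workload with crc32c verification through the SPDK fio wrapper.
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v

# One disconnect by NQN tears down both controllers/paths.
nvme disconnect -n "$subnqn"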
00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:41:20.127 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:20.128 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:20.128 rmmod nvme_tcp 00:41:20.128 rmmod nvme_fabrics 00:41:20.128 rmmod nvme_keyring 00:41:20.128 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 91779 ']' 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 91779 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 91779 ']' 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 91779 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91779 00:41:20.387 killing process with pid 91779 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91779' 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 91779 00:41:20.387 14:54:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 91779 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:20.387 14:54:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.647 14:54:40 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:41:20.647 00:41:20.647 real 0m5.960s 00:41:20.647 user 0m20.208s 00:41:20.647 sys 0m1.238s 00:41:20.647 14:54:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:20.647 14:54:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:41:20.647 ************************************ 00:41:20.647 END TEST nvmf_nmic 00:41:20.647 ************************************ 00:41:20.647 14:54:40 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:41:20.647 14:54:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:41:20.647 14:54:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:41:20.647 14:54:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
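nvmf_fio_target starts next and rebuilds the same virtual topology that nvmf_veth_init created for nvmf_nmic: the initiator side stays in the root namespace on 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, and the bridge-side ends of the veth pairs are joined by nvmf_br. A condensed sketch of that setup, assuming root privileges and no leftover interfaces from a previous run, is:

#!/usr/bin/env bash
# Topology sketch (each *_br peer stays in the root namespace on the bridge):
#   nvmf_init_if (root ns, 10.0.0.1)             <-> nvmf_init_br -+
#   nvmf_tgt_if  (nvmf_tgt_ns_spdk, 10.0.0.2)    <-> nvmf_tgt_br  -+-> nvmf_br (root ns)
#   nvmf_tgt_if2 (nvmf_tgt_ns_spdk, 10.0.0.3)    <-> nvmf_tgt_br2 -+
set -euo pipefail
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity-check connectivity in both directions.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1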
00:41:20.647 ************************************ 00:41:20.647 START TEST nvmf_fio_target 00:41:20.647 ************************************ 00:41:20.647 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:41:20.907 * Looking for test storage... 00:41:20.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:20.907 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:20.908 
14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:20.908 Cannot find device "nvmf_tgt_br" 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:20.908 Cannot find device "nvmf_tgt_br2" 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:20.908 Cannot find device "nvmf_tgt_br" 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:20.908 Cannot find device "nvmf_tgt_br2" 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:20.908 14:54:40 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:20.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:20.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:20.908 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:21.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:21.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:41:21.168 00:41:21.168 --- 10.0.0.2 ping statistics --- 00:41:21.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:21.168 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:21.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:21.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:41:21.168 00:41:21.168 --- 10.0.0.3 ping statistics --- 00:41:21.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:21.168 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:21.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:21.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:41:21.168 00:41:21.168 --- 10.0.0.1 ping statistics --- 00:41:21.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:21.168 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:21.168 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=92067 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 92067 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 92067 ']' 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:21.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
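With the namespace and addresses in place, the target application itself is launched inside nvmf_tgt_ns_spdk, and the test waits for its JSON-RPC socket before issuing any rpc_cmd calls. A minimal sketch of that launch-and-wait step, using the binary path and flags from this run; polling for the socket file is a simplified stand-in for the waitforlisten helper used in the trace:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt inside the target namespace and wait for its
# JSON-RPC socket to appear before driving it with rpc.py.
set -euo pipefail
ns=nvmf_tgt_ns_spdk
sock=/var/tmp/spdk.sock

ip netns exec "$ns" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
until [ -S "$sock" ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is ready"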
00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:21.169 14:54:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:21.169 [2024-07-22 14:54:40.774894] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:41:21.169 [2024-07-22 14:54:40.774954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:21.427 [2024-07-22 14:54:40.915834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:21.427 [2024-07-22 14:54:40.964835] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:21.427 [2024-07-22 14:54:40.964890] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:21.427 [2024-07-22 14:54:40.964896] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:21.427 [2024-07-22 14:54:40.964900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:21.427 [2024-07-22 14:54:40.964904] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:21.427 [2024-07-22 14:54:40.965024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:21.427 [2024-07-22 14:54:40.965214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:21.427 [2024-07-22 14:54:40.965407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.428 [2024-07-22 14:54:40.965412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:22.390 [2024-07-22 14:54:41.875835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:22.390 14:54:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:22.649 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:22.649 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:22.908 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:22.908 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:23.168 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:23.168 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:23.168 
14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:23.168 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:23.428 14:54:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:23.687 14:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:23.687 14:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:23.947 14:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:23.947 14:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:24.207 14:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:41:24.207 14:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:24.207 14:54:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:24.465 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:24.465 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:24.724 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:24.724 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:24.983 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:24.983 [2024-07-22 14:54:44.583760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:25.241 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:25.241 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:25.499 14:54:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:25.758 14:54:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:25.758 14:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:41:25.758 14:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:41:25.758 14:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:41:25.758 14:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:41:25.758 14:54:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:41:27.664 14:54:47 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:41:27.664 14:54:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:41:27.664 14:54:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:41:27.664 14:54:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:41:27.664 14:54:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:41:27.664 14:54:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:41:27.664 14:54:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:27.664 [global] 00:41:27.664 thread=1 00:41:27.664 invalidate=1 00:41:27.664 rw=write 00:41:27.664 time_based=1 00:41:27.664 runtime=1 00:41:27.664 ioengine=libaio 00:41:27.665 direct=1 00:41:27.665 bs=4096 00:41:27.665 iodepth=1 00:41:27.665 norandommap=0 00:41:27.665 numjobs=1 00:41:27.665 00:41:27.665 verify_dump=1 00:41:27.665 verify_backlog=512 00:41:27.665 verify_state_save=0 00:41:27.665 do_verify=1 00:41:27.665 verify=crc32c-intel 00:41:27.665 [job0] 00:41:27.665 filename=/dev/nvme0n1 00:41:27.665 [job1] 00:41:27.665 filename=/dev/nvme0n2 00:41:27.665 [job2] 00:41:27.665 filename=/dev/nvme0n3 00:41:27.665 [job3] 00:41:27.665 filename=/dev/nvme0n4 00:41:27.665 Could not set queue depth (nvme0n1) 00:41:27.665 Could not set queue depth (nvme0n2) 00:41:27.665 Could not set queue depth (nvme0n3) 00:41:27.665 Could not set queue depth (nvme0n4) 00:41:27.924 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:27.924 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:27.924 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:27.924 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:27.924 fio-3.35 00:41:27.924 Starting 4 threads 00:41:29.301 00:41:29.301 job0: (groupid=0, jobs=1): err= 0: pid=92354: Mon Jul 22 14:54:48 2024 00:41:29.301 read: IOPS=3064, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:41:29.301 slat (nsec): min=5593, max=48939, avg=8330.57, stdev=2330.08 00:41:29.301 clat (usec): min=107, max=352, avg=174.42, stdev=54.76 00:41:29.301 lat (usec): min=115, max=361, avg=182.75, stdev=54.67 00:41:29.301 clat percentiles (usec): 00:41:29.301 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:41:29.301 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 155], 00:41:29.301 | 70.00th=[ 210], 80.00th=[ 231], 90.00th=[ 255], 95.00th=[ 285], 00:41:29.301 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 351], 00:41:29.301 | 99.99th=[ 355] 00:41:29.301 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:41:29.301 slat (usec): min=6, max=213, avg=13.59, stdev= 8.45 00:41:29.301 clat (usec): min=79, max=286, avg=127.40, stdev=31.87 00:41:29.301 lat (usec): min=91, max=414, avg=140.99, stdev=32.78 00:41:29.301 clat percentiles (usec): 00:41:29.301 | 1.00th=[ 93], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 105], 00:41:29.301 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 120], 00:41:29.301 | 70.00th=[ 129], 80.00th=[ 151], 90.00th=[ 178], 95.00th=[ 198], 00:41:29.301 | 99.00th=[ 231], 99.50th=[ 247], 99.90th=[ 262], 
99.95th=[ 277], 00:41:29.301 | 99.99th=[ 289] 00:41:29.301 bw ( KiB/s): min=16351, max=16351, per=31.70%, avg=16351.00, stdev= 0.00, samples=1 00:41:29.301 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:41:29.301 lat (usec) : 100=3.93%, 250=90.18%, 500=5.90% 00:41:29.301 cpu : usr=0.60%, sys=5.70%, ctx=6140, majf=0, minf=10 00:41:29.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.301 issued rwts: total=3068,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:29.301 job1: (groupid=0, jobs=1): err= 0: pid=92355: Mon Jul 22 14:54:48 2024 00:41:29.301 read: IOPS=2882, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:41:29.301 slat (nsec): min=4778, max=31139, avg=8142.44, stdev=2072.57 00:41:29.301 clat (usec): min=110, max=8094, avg=182.19, stdev=251.69 00:41:29.301 lat (usec): min=118, max=8102, avg=190.33, stdev=251.78 00:41:29.301 clat percentiles (usec): 00:41:29.301 | 1.00th=[ 119], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:41:29.301 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 151], 00:41:29.301 | 70.00th=[ 202], 80.00th=[ 229], 90.00th=[ 251], 95.00th=[ 277], 00:41:29.301 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 3523], 99.95th=[ 7898], 00:41:29.301 | 99.99th=[ 8094] 00:41:29.301 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:41:29.301 slat (usec): min=6, max=147, avg=13.81, stdev= 7.70 00:41:29.301 clat (usec): min=88, max=1116, avg=130.97, stdev=38.93 00:41:29.301 lat (usec): min=100, max=1127, avg=144.78, stdev=39.26 00:41:29.301 clat percentiles (usec): 00:41:29.301 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 105], 00:41:29.301 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 123], 00:41:29.301 | 70.00th=[ 135], 80.00th=[ 161], 90.00th=[ 184], 95.00th=[ 208], 00:41:29.301 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 293], 00:41:29.301 | 99.99th=[ 1123] 00:41:29.301 bw ( KiB/s): min=16351, max=16351, per=31.70%, avg=16351.00, stdev= 0.00, samples=1 00:41:29.301 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:41:29.301 lat (usec) : 100=3.89%, 250=90.90%, 500=5.05%, 750=0.02% 00:41:29.301 lat (msec) : 2=0.02%, 4=0.08%, 10=0.03% 00:41:29.301 cpu : usr=1.10%, sys=5.00%, ctx=5957, majf=0, minf=7 00:41:29.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.301 issued rwts: total=2885,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:29.301 job2: (groupid=0, jobs=1): err= 0: pid=92356: Mon Jul 22 14:54:48 2024 00:41:29.301 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:41:29.301 slat (nsec): min=6890, max=89644, avg=8745.00, stdev=4141.28 00:41:29.301 clat (usec): min=119, max=424, avg=158.60, stdev=14.70 00:41:29.302 lat (usec): min=127, max=433, avg=167.35, stdev=16.36 00:41:29.302 clat percentiles (usec): 00:41:29.302 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:41:29.302 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:41:29.302 | 70.00th=[ 163], 80.00th=[ 169], 
90.00th=[ 178], 95.00th=[ 184], 00:41:29.302 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 225], 99.95th=[ 334], 00:41:29.302 | 99.99th=[ 424] 00:41:29.302 write: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec); 0 zone resets 00:41:29.302 slat (usec): min=10, max=137, avg=13.33, stdev= 6.71 00:41:29.302 clat (usec): min=90, max=401, avg=129.31, stdev=13.37 00:41:29.302 lat (usec): min=101, max=414, avg=142.64, stdev=16.42 00:41:29.302 clat percentiles (usec): 00:41:29.302 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:41:29.302 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:41:29.302 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:41:29.302 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 235], 00:41:29.302 | 99.99th=[ 404] 00:41:29.302 bw ( KiB/s): min=13341, max=13341, per=25.87%, avg=13341.00, stdev= 0.00, samples=1 00:41:29.302 iops : min= 3335, max= 3335, avg=3335.00, stdev= 0.00, samples=1 00:41:29.302 lat (usec) : 100=0.05%, 250=99.91%, 500=0.05% 00:41:29.302 cpu : usr=1.00%, sys=5.50%, ctx=6433, majf=0, minf=9 00:41:29.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.302 issued rwts: total=3072,3361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:29.302 job3: (groupid=0, jobs=1): err= 0: pid=92357: Mon Jul 22 14:54:48 2024 00:41:29.302 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:41:29.302 slat (usec): min=6, max=154, avg=10.38, stdev= 5.63 00:41:29.302 clat (usec): min=122, max=1751, avg=153.25, stdev=33.05 00:41:29.302 lat (usec): min=132, max=1760, avg=163.64, stdev=34.10 00:41:29.302 clat percentiles (usec): 00:41:29.302 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:41:29.302 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:41:29.302 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 178], 00:41:29.302 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 351], 99.95th=[ 461], 00:41:29.302 | 99.99th=[ 1745] 00:41:29.302 write: IOPS=3398, BW=13.3MiB/s (13.9MB/s)(13.3MiB/1001msec); 0 zone resets 00:41:29.302 slat (usec): min=10, max=136, avg=17.03, stdev= 9.70 00:41:29.302 clat (usec): min=90, max=1569, avg=126.84, stdev=28.03 00:41:29.302 lat (usec): min=101, max=1580, avg=143.87, stdev=30.86 00:41:29.302 clat percentiles (usec): 00:41:29.302 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:41:29.302 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 128], 00:41:29.302 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:41:29.302 | 99.00th=[ 165], 99.50th=[ 167], 99.90th=[ 215], 99.95th=[ 371], 00:41:29.302 | 99.99th=[ 1565] 00:41:29.302 bw ( KiB/s): min=13352, max=13352, per=25.89%, avg=13352.00, stdev= 0.00, samples=1 00:41:29.302 iops : min= 3338, max= 3338, avg=3338.00, stdev= 0.00, samples=1 00:41:29.302 lat (usec) : 100=0.14%, 250=99.71%, 500=0.12% 00:41:29.302 lat (msec) : 2=0.03% 00:41:29.302 cpu : usr=1.70%, sys=6.40%, ctx=6476, majf=0, minf=9 00:41:29.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.302 issued rwts: 
total=3072,3402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:29.302 00:41:29.302 Run status group 0 (all jobs): 00:41:29.302 READ: bw=47.2MiB/s (49.5MB/s), 11.3MiB/s-12.0MiB/s (11.8MB/s-12.6MB/s), io=47.3MiB (49.5MB), run=1001-1001msec 00:41:29.302 WRITE: bw=50.4MiB/s (52.8MB/s), 12.0MiB/s-13.3MiB/s (12.6MB/s-13.9MB/s), io=50.4MiB (52.9MB), run=1001-1001msec 00:41:29.302 00:41:29.302 Disk stats (read/write): 00:41:29.302 nvme0n1: ios=2617/3072, merge=0/0, ticks=422/404, in_queue=826, util=88.77% 00:41:29.302 nvme0n2: ios=2609/2801, merge=0/0, ticks=466/371, in_queue=837, util=88.29% 00:41:29.302 nvme0n3: ios=2608/3072, merge=0/0, ticks=432/412, in_queue=844, util=90.27% 00:41:29.302 nvme0n4: ios=2598/3072, merge=0/0, ticks=402/409, in_queue=811, util=89.83% 00:41:29.302 14:54:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:29.302 [global] 00:41:29.302 thread=1 00:41:29.302 invalidate=1 00:41:29.302 rw=randwrite 00:41:29.302 time_based=1 00:41:29.302 runtime=1 00:41:29.302 ioengine=libaio 00:41:29.302 direct=1 00:41:29.302 bs=4096 00:41:29.302 iodepth=1 00:41:29.302 norandommap=0 00:41:29.302 numjobs=1 00:41:29.302 00:41:29.302 verify_dump=1 00:41:29.302 verify_backlog=512 00:41:29.302 verify_state_save=0 00:41:29.302 do_verify=1 00:41:29.302 verify=crc32c-intel 00:41:29.302 [job0] 00:41:29.302 filename=/dev/nvme0n1 00:41:29.302 [job1] 00:41:29.302 filename=/dev/nvme0n2 00:41:29.302 [job2] 00:41:29.302 filename=/dev/nvme0n3 00:41:29.302 [job3] 00:41:29.302 filename=/dev/nvme0n4 00:41:29.302 Could not set queue depth (nvme0n1) 00:41:29.302 Could not set queue depth (nvme0n2) 00:41:29.302 Could not set queue depth (nvme0n3) 00:41:29.302 Could not set queue depth (nvme0n4) 00:41:29.302 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:29.302 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:29.302 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:29.302 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:29.302 fio-3.35 00:41:29.302 Starting 4 threads 00:41:30.678 00:41:30.678 job0: (groupid=0, jobs=1): err= 0: pid=92410: Mon Jul 22 14:54:49 2024 00:41:30.678 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:41:30.678 slat (nsec): min=6790, max=85952, avg=9662.59, stdev=4658.31 00:41:30.678 clat (usec): min=117, max=434, avg=239.84, stdev=19.92 00:41:30.678 lat (usec): min=125, max=443, avg=249.50, stdev=19.73 00:41:30.678 clat percentiles (usec): 00:41:30.678 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:41:30.678 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:41:30.678 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 273], 00:41:30.678 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 375], 99.95th=[ 416], 00:41:30.678 | 99.99th=[ 437] 00:41:30.678 write: IOPS=2351, BW=9404KiB/s (9630kB/s)(9404KiB/1000msec); 0 zone resets 00:41:30.678 slat (usec): min=10, max=100, avg=14.61, stdev= 5.85 00:41:30.678 clat (usec): min=89, max=1692, avg=191.18, stdev=34.89 00:41:30.678 lat (usec): min=111, max=1705, avg=205.79, stdev=35.40 00:41:30.678 clat percentiles (usec): 00:41:30.678 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 
20.00th=[ 178], 00:41:30.678 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:41:30.678 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 219], 00:41:30.678 | 99.00th=[ 235], 99.50th=[ 247], 99.90th=[ 258], 99.95th=[ 269], 00:41:30.678 | 99.99th=[ 1696] 00:41:30.678 bw ( KiB/s): min= 9080, max= 9080, per=23.13%, avg=9080.00, stdev= 0.00, samples=1 00:41:30.678 iops : min= 2270, max= 2270, avg=2270.00, stdev= 0.00, samples=1 00:41:30.678 lat (usec) : 100=0.07%, 250=87.66%, 500=12.25% 00:41:30.678 lat (msec) : 2=0.02% 00:41:30.678 cpu : usr=0.70%, sys=4.20%, ctx=4399, majf=0, minf=14 00:41:30.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.678 issued rwts: total=2048,2351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:30.678 job1: (groupid=0, jobs=1): err= 0: pid=92411: Mon Jul 22 14:54:49 2024 00:41:30.678 read: IOPS=2067, BW=8272KiB/s (8470kB/s)(8280KiB/1001msec) 00:41:30.678 slat (nsec): min=4589, max=25476, avg=7378.80, stdev=2220.83 00:41:30.678 clat (usec): min=121, max=610, avg=232.32, stdev=33.46 00:41:30.678 lat (usec): min=126, max=616, avg=239.69, stdev=34.11 00:41:30.678 clat percentiles (usec): 00:41:30.678 | 1.00th=[ 143], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 210], 00:41:30.678 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:41:30.678 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 302], 00:41:30.678 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 388], 99.95th=[ 523], 00:41:30.678 | 99.99th=[ 611] 00:41:30.678 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:30.678 slat (usec): min=6, max=156, avg=13.23, stdev= 7.56 00:41:30.678 clat (usec): min=85, max=7665, avg=181.76, stdev=169.78 00:41:30.678 lat (usec): min=96, max=7677, avg=194.99, stdev=170.13 00:41:30.678 clat percentiles (usec): 00:41:30.678 | 1.00th=[ 103], 5.00th=[ 114], 10.00th=[ 143], 20.00th=[ 157], 00:41:30.678 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:41:30.678 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 219], 00:41:30.678 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 2376], 99.95th=[ 3392], 00:41:30.678 | 99.99th=[ 7635] 00:41:30.678 bw ( KiB/s): min= 8936, max= 8936, per=22.76%, avg=8936.00, stdev= 0.00, samples=1 00:41:30.678 iops : min= 2234, max= 2234, avg=2234.00, stdev= 0.00, samples=1 00:41:30.678 lat (usec) : 100=0.30%, 250=90.13%, 500=9.44%, 750=0.04%, 1000=0.02% 00:41:30.678 lat (msec) : 4=0.04%, 10=0.02% 00:41:30.678 cpu : usr=0.80%, sys=3.90%, ctx=4631, majf=0, minf=7 00:41:30.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.678 issued rwts: total=2070,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:30.678 job2: (groupid=0, jobs=1): err= 0: pid=92412: Mon Jul 22 14:54:49 2024 00:41:30.678 read: IOPS=2167, BW=8671KiB/s (8879kB/s)(8680KiB/1001msec) 00:41:30.678 slat (nsec): min=5487, max=24712, avg=7834.82, stdev=1608.09 00:41:30.678 clat (usec): min=131, max=611, avg=229.35, stdev=30.83 00:41:30.678 lat (usec): min=137, 
max=619, avg=237.19, stdev=31.07 00:41:30.678 clat percentiles (usec): 00:41:30.678 | 1.00th=[ 145], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 208], 00:41:30.678 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:41:30.678 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 285], 00:41:30.678 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 338], 99.95th=[ 404], 00:41:30.678 | 99.99th=[ 611] 00:41:30.678 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:30.678 slat (usec): min=6, max=119, avg=12.96, stdev= 7.05 00:41:30.678 clat (usec): min=91, max=670, avg=174.88, stdev=30.93 00:41:30.678 lat (usec): min=102, max=683, avg=187.84, stdev=31.86 00:41:30.678 clat percentiles (usec): 00:41:30.678 | 1.00th=[ 103], 5.00th=[ 118], 10.00th=[ 133], 20.00th=[ 153], 00:41:30.678 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 178], 60.00th=[ 186], 00:41:30.678 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:41:30.678 | 99.00th=[ 241], 99.50th=[ 253], 99.90th=[ 277], 99.95th=[ 277], 00:41:30.678 | 99.99th=[ 668] 00:41:30.678 bw ( KiB/s): min=10248, max=10248, per=26.10%, avg=10248.00, stdev= 0.00, samples=1 00:41:30.678 iops : min= 2562, max= 2562, avg=2562.00, stdev= 0.00, samples=1 00:41:30.678 lat (usec) : 100=0.34%, 250=90.59%, 500=9.03%, 750=0.04% 00:41:30.678 cpu : usr=0.80%, sys=4.00%, ctx=4733, majf=0, minf=9 00:41:30.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.678 issued rwts: total=2170,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:30.678 job3: (groupid=0, jobs=1): err= 0: pid=92413: Mon Jul 22 14:54:49 2024 00:41:30.679 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:41:30.679 slat (nsec): min=7006, max=63230, avg=8262.47, stdev=1824.84 00:41:30.679 clat (usec): min=138, max=603, avg=241.86, stdev=19.55 00:41:30.679 lat (usec): min=146, max=610, avg=250.13, stdev=19.92 00:41:30.679 clat percentiles (usec): 00:41:30.679 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:41:30.679 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:41:30.679 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:41:30.679 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 351], 99.95th=[ 355], 00:41:30.679 | 99.99th=[ 603] 00:41:30.679 write: IOPS=2351, BW=9407KiB/s (9632kB/s)(9416KiB/1001msec); 0 zone resets 00:41:30.679 slat (usec): min=10, max=119, avg=13.76, stdev= 6.67 00:41:30.679 clat (usec): min=90, max=466, avg=191.54, stdev=18.33 00:41:30.679 lat (usec): min=104, max=479, avg=205.31, stdev=18.55 00:41:30.679 clat percentiles (usec): 00:41:30.679 | 1.00th=[ 147], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:41:30.679 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:41:30.679 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 221], 00:41:30.679 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 260], 99.95th=[ 262], 00:41:30.679 | 99.99th=[ 465] 00:41:30.679 bw ( KiB/s): min= 9120, max= 9120, per=23.23%, avg=9120.00, stdev= 0.00, samples=1 00:41:30.679 iops : min= 2280, max= 2280, avg=2280.00, stdev= 0.00, samples=1 00:41:30.679 lat (usec) : 100=0.02%, 250=85.98%, 500=13.97%, 750=0.02% 00:41:30.679 cpu : usr=0.90%, sys=3.50%, ctx=4402, majf=0, minf=15 00:41:30.679 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:30.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:30.679 issued rwts: total=2048,2354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:30.679 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:30.679 00:41:30.679 Run status group 0 (all jobs): 00:41:30.679 READ: bw=32.5MiB/s (34.1MB/s), 8184KiB/s-8671KiB/s (8380kB/s-8879kB/s), io=32.6MiB (34.1MB), run=1000-1001msec 00:41:30.679 WRITE: bw=38.3MiB/s (40.2MB/s), 9404KiB/s-9.99MiB/s (9630kB/s-10.5MB/s), io=38.4MiB (40.2MB), run=1000-1001msec 00:41:30.679 00:41:30.679 Disk stats (read/write): 00:41:30.679 nvme0n1: ios=1881/2048, merge=0/0, ticks=463/407, in_queue=870, util=89.58% 00:41:30.679 nvme0n2: ios=2021/2048, merge=0/0, ticks=474/376, in_queue=850, util=89.12% 00:41:30.679 nvme0n3: ios=2087/2090, merge=0/0, ticks=495/371, in_queue=866, util=90.64% 00:41:30.679 nvme0n4: ios=1863/2048, merge=0/0, ticks=492/395, in_queue=887, util=91.12% 00:41:30.679 14:54:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:30.679 [global] 00:41:30.679 thread=1 00:41:30.679 invalidate=1 00:41:30.679 rw=write 00:41:30.679 time_based=1 00:41:30.679 runtime=1 00:41:30.679 ioengine=libaio 00:41:30.679 direct=1 00:41:30.679 bs=4096 00:41:30.679 iodepth=128 00:41:30.679 norandommap=0 00:41:30.679 numjobs=1 00:41:30.679 00:41:30.679 verify_dump=1 00:41:30.679 verify_backlog=512 00:41:30.679 verify_state_save=0 00:41:30.679 do_verify=1 00:41:30.679 verify=crc32c-intel 00:41:30.679 [job0] 00:41:30.679 filename=/dev/nvme0n1 00:41:30.679 [job1] 00:41:30.679 filename=/dev/nvme0n2 00:41:30.679 [job2] 00:41:30.679 filename=/dev/nvme0n3 00:41:30.679 [job3] 00:41:30.679 filename=/dev/nvme0n4 00:41:30.679 Could not set queue depth (nvme0n1) 00:41:30.679 Could not set queue depth (nvme0n2) 00:41:30.679 Could not set queue depth (nvme0n3) 00:41:30.679 Could not set queue depth (nvme0n4) 00:41:30.679 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:30.679 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:30.679 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:30.679 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:30.679 fio-3.35 00:41:30.679 Starting 4 threads 00:41:32.058 00:41:32.058 job0: (groupid=0, jobs=1): err= 0: pid=92466: Mon Jul 22 14:54:51 2024 00:41:32.058 read: IOPS=2888, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1005msec) 00:41:32.058 slat (usec): min=5, max=11115, avg=192.69, stdev=820.66 00:41:32.058 clat (usec): min=571, max=46639, avg=24155.09, stdev=8618.03 00:41:32.058 lat (usec): min=11206, max=46656, avg=24347.78, stdev=8645.29 00:41:32.058 clat percentiles (usec): 00:41:32.058 | 1.00th=[11469], 5.00th=[15664], 10.00th=[16450], 20.00th=[17695], 00:41:32.058 | 30.00th=[18220], 40.00th=[18482], 50.00th=[19268], 60.00th=[23200], 00:41:32.058 | 70.00th=[27657], 80.00th=[32637], 90.00th=[39060], 95.00th=[41681], 00:41:32.058 | 99.00th=[43779], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:41:32.058 | 99.99th=[46400] 00:41:32.058 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:41:32.058 slat (usec): 
min=22, max=6829, avg=134.98, stdev=602.18 00:41:32.058 clat (usec): min=9924, max=43960, avg=18391.37, stdev=6430.13 00:41:32.058 lat (usec): min=12226, max=44007, avg=18526.35, stdev=6439.50 00:41:32.058 clat percentiles (usec): 00:41:32.058 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12649], 20.00th=[13042], 00:41:32.058 | 30.00th=[13960], 40.00th=[14222], 50.00th=[16581], 60.00th=[18220], 00:41:32.058 | 70.00th=[20317], 80.00th=[23462], 90.00th=[26870], 95.00th=[31851], 00:41:32.058 | 99.00th=[38536], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:41:32.058 | 99.99th=[43779] 00:41:32.058 bw ( KiB/s): min=10568, max=14008, per=22.32%, avg=12288.00, stdev=2432.45, samples=2 00:41:32.058 iops : min= 2642, max= 3502, avg=3072.00, stdev=608.11, samples=2 00:41:32.058 lat (usec) : 750=0.02% 00:41:32.058 lat (msec) : 10=0.02%, 20=60.13%, 50=39.83% 00:41:32.058 cpu : usr=3.69%, sys=11.55%, ctx=270, majf=0, minf=9 00:41:32.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:41:32.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:32.058 issued rwts: total=2903,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:32.058 job1: (groupid=0, jobs=1): err= 0: pid=92467: Mon Jul 22 14:54:51 2024 00:41:32.059 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:41:32.059 slat (usec): min=4, max=9628, avg=187.29, stdev=712.68 00:41:32.059 clat (usec): min=13473, max=49264, avg=23010.04, stdev=5550.02 00:41:32.059 lat (usec): min=13750, max=49287, avg=23197.33, stdev=5591.56 00:41:32.059 clat percentiles (usec): 00:41:32.059 | 1.00th=[14091], 5.00th=[16319], 10.00th=[17171], 20.00th=[18744], 00:41:32.059 | 30.00th=[20055], 40.00th=[20579], 50.00th=[21627], 60.00th=[23200], 00:41:32.059 | 70.00th=[25297], 80.00th=[27132], 90.00th=[28967], 95.00th=[33817], 00:41:32.059 | 99.00th=[41681], 99.50th=[44827], 99.90th=[47449], 99.95th=[49021], 00:41:32.059 | 99.99th=[49021] 00:41:32.059 write: IOPS=2897, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1006msec); 0 zone resets 00:41:32.059 slat (usec): min=7, max=6340, avg=171.06, stdev=580.67 00:41:32.059 clat (usec): min=480, max=47666, avg=23205.62, stdev=7956.43 00:41:32.059 lat (usec): min=5643, max=47701, avg=23376.68, stdev=7984.64 00:41:32.059 clat percentiles (usec): 00:41:32.059 | 1.00th=[ 6718], 5.00th=[14746], 10.00th=[16188], 20.00th=[18220], 00:41:32.059 | 30.00th=[19530], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], 00:41:32.059 | 70.00th=[22938], 80.00th=[26084], 90.00th=[36439], 95.00th=[40633], 00:41:32.059 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46924], 99.95th=[47449], 00:41:32.059 | 99.99th=[47449] 00:41:32.059 bw ( KiB/s): min=10008, max=12288, per=20.25%, avg=11148.00, stdev=1612.20, samples=2 00:41:32.059 iops : min= 2502, max= 3072, avg=2787.00, stdev=403.05, samples=2 00:41:32.059 lat (usec) : 500=0.02% 00:41:32.059 lat (msec) : 10=0.60%, 20=32.51%, 50=66.87% 00:41:32.059 cpu : usr=2.69%, sys=11.54%, ctx=995, majf=0, minf=3 00:41:32.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:32.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:32.059 issued rwts: total=2560,2915,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.059 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:41:32.059 job2: (groupid=0, jobs=1): err= 0: pid=92468: Mon Jul 22 14:54:51 2024 00:41:32.059 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:41:32.059 slat (usec): min=3, max=6902, avg=175.18, stdev=652.11 00:41:32.059 clat (usec): min=12117, max=47584, avg=23547.56, stdev=5633.56 00:41:32.059 lat (usec): min=14406, max=47618, avg=23722.74, stdev=5646.12 00:41:32.059 clat percentiles (usec): 00:41:32.059 | 1.00th=[14746], 5.00th=[16057], 10.00th=[16581], 20.00th=[17957], 00:41:32.059 | 30.00th=[19006], 40.00th=[22414], 50.00th=[23987], 60.00th=[25035], 00:41:32.059 | 70.00th=[26608], 80.00th=[27657], 90.00th=[30016], 95.00th=[31851], 00:41:32.059 | 99.00th=[42206], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:41:32.059 | 99.99th=[47449] 00:41:32.059 write: IOPS=2733, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1007msec); 0 zone resets 00:41:32.059 slat (usec): min=7, max=7808, avg=193.00, stdev=636.00 00:41:32.059 clat (usec): min=637, max=46367, avg=24065.23, stdev=7238.11 00:41:32.059 lat (usec): min=4617, max=46405, avg=24258.24, stdev=7272.76 00:41:32.059 clat percentiles (usec): 00:41:32.059 | 1.00th=[ 5604], 5.00th=[15926], 10.00th=[17695], 20.00th=[18744], 00:41:32.059 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21627], 60.00th=[23462], 00:41:32.059 | 70.00th=[25822], 80.00th=[30016], 90.00th=[35914], 95.00th=[36963], 00:41:32.059 | 99.00th=[44827], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:41:32.059 | 99.99th=[46400] 00:41:32.059 bw ( KiB/s): min= 8712, max=12288, per=19.07%, avg=10500.00, stdev=2528.61, samples=2 00:41:32.059 iops : min= 2178, max= 3072, avg=2625.00, stdev=632.15, samples=2 00:41:32.059 lat (usec) : 750=0.02% 00:41:32.059 lat (msec) : 10=0.94%, 20=28.72%, 50=70.32% 00:41:32.059 cpu : usr=1.59%, sys=11.13%, ctx=747, majf=0, minf=10 00:41:32.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:32.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:32.059 issued rwts: total=2560,2753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:32.059 job3: (groupid=0, jobs=1): err= 0: pid=92469: Mon Jul 22 14:54:51 2024 00:41:32.059 read: IOPS=4989, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1002msec) 00:41:32.059 slat (usec): min=5, max=3807, avg=90.62, stdev=334.65 00:41:32.059 clat (usec): min=1390, max=24321, avg=12296.54, stdev=3824.76 00:41:32.059 lat (usec): min=1407, max=24351, avg=12387.17, stdev=3846.58 00:41:32.059 clat percentiles (usec): 00:41:32.059 | 1.00th=[ 4883], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:41:32.059 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:41:32.059 | 70.00th=[11600], 80.00th=[16450], 90.00th=[19268], 95.00th=[20579], 00:41:32.059 | 99.00th=[22414], 99.50th=[22938], 99.90th=[23725], 99.95th=[23987], 00:41:32.059 | 99.99th=[24249] 00:41:32.059 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:41:32.059 slat (usec): min=7, max=4993, avg=96.88, stdev=339.50 00:41:32.059 clat (usec): min=8189, max=22867, avg=12595.24, stdev=4113.22 00:41:32.059 lat (usec): min=8273, max=22901, avg=12692.12, stdev=4143.30 00:41:32.059 clat percentiles (usec): 00:41:32.059 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:41:32.059 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10945], 00:41:32.059 | 70.00th=[11863], 
80.00th=[18220], 90.00th=[20055], 95.00th=[20841], 00:41:32.059 | 99.00th=[21627], 99.50th=[21890], 99.90th=[22938], 99.95th=[22938], 00:41:32.059 | 99.99th=[22938] 00:41:32.059 bw ( KiB/s): min=16416, max=24576, per=37.23%, avg=20496.00, stdev=5769.99, samples=2 00:41:32.059 iops : min= 4104, max= 6144, avg=5124.00, stdev=1442.50, samples=2 00:41:32.059 lat (msec) : 2=0.20%, 4=0.01%, 10=28.13%, 20=62.27%, 50=9.40% 00:41:32.059 cpu : usr=5.79%, sys=19.78%, ctx=1009, majf=0, minf=9 00:41:32.059 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:32.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:32.059 issued rwts: total=4999,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.059 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:32.059 00:41:32.059 Run status group 0 (all jobs): 00:41:32.059 READ: bw=50.5MiB/s (53.0MB/s), 9.93MiB/s-19.5MiB/s (10.4MB/s-20.4MB/s), io=50.9MiB (53.3MB), run=1002-1007msec 00:41:32.059 WRITE: bw=53.8MiB/s (56.4MB/s), 10.7MiB/s-20.0MiB/s (11.2MB/s-20.9MB/s), io=54.1MiB (56.8MB), run=1002-1007msec 00:41:32.059 00:41:32.059 Disk stats (read/write): 00:41:32.059 nvme0n1: ios=2610/2966, merge=0/0, ticks=14325/10393, in_queue=24718, util=89.28% 00:41:32.059 nvme0n2: ios=2292/2560, merge=0/0, ticks=12363/12530, in_queue=24893, util=89.02% 00:41:32.059 nvme0n3: ios=2140/2560, merge=0/0, ticks=11097/14490, in_queue=25587, util=89.71% 00:41:32.059 nvme0n4: ios=4126/4608, merge=0/0, ticks=11484/11559, in_queue=23043, util=90.09% 00:41:32.059 14:54:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:32.059 [global] 00:41:32.059 thread=1 00:41:32.059 invalidate=1 00:41:32.059 rw=randwrite 00:41:32.059 time_based=1 00:41:32.059 runtime=1 00:41:32.059 ioengine=libaio 00:41:32.060 direct=1 00:41:32.060 bs=4096 00:41:32.060 iodepth=128 00:41:32.060 norandommap=0 00:41:32.060 numjobs=1 00:41:32.060 00:41:32.060 verify_dump=1 00:41:32.060 verify_backlog=512 00:41:32.060 verify_state_save=0 00:41:32.060 do_verify=1 00:41:32.060 verify=crc32c-intel 00:41:32.060 [job0] 00:41:32.060 filename=/dev/nvme0n1 00:41:32.060 [job1] 00:41:32.060 filename=/dev/nvme0n2 00:41:32.060 [job2] 00:41:32.060 filename=/dev/nvme0n3 00:41:32.060 [job3] 00:41:32.060 filename=/dev/nvme0n4 00:41:32.060 Could not set queue depth (nvme0n1) 00:41:32.060 Could not set queue depth (nvme0n2) 00:41:32.060 Could not set queue depth (nvme0n3) 00:41:32.060 Could not set queue depth (nvme0n4) 00:41:32.060 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:32.060 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:32.060 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:32.060 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:32.060 fio-3.35 00:41:32.060 Starting 4 threads 00:41:33.441 00:41:33.441 job0: (groupid=0, jobs=1): err= 0: pid=92533: Mon Jul 22 14:54:52 2024 00:41:33.442 read: IOPS=2353, BW=9414KiB/s (9640kB/s)(9452KiB/1004msec) 00:41:33.442 slat (usec): min=7, max=7705, avg=131.22, stdev=655.29 00:41:33.442 clat (usec): min=2769, max=27224, avg=15794.79, stdev=2731.27 00:41:33.442 lat (usec): 
min=5544, max=27242, avg=15926.01, stdev=2790.60 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[ 8586], 5.00th=[12387], 10.00th=[13435], 20.00th=[14353], 00:41:33.442 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15270], 60.00th=[16188], 00:41:33.442 | 70.00th=[16450], 80.00th=[16909], 90.00th=[18220], 95.00th=[21365], 00:41:33.442 | 99.00th=[25297], 99.50th=[26608], 99.90th=[27132], 99.95th=[27132], 00:41:33.442 | 99.99th=[27132] 00:41:33.442 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:41:33.442 slat (usec): min=22, max=27464, avg=260.14, stdev=1283.97 00:41:33.442 clat (usec): min=12381, max=75027, avg=34447.08, stdev=16500.49 00:41:33.442 lat (usec): min=12414, max=75096, avg=34707.23, stdev=16577.50 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[13698], 5.00th=[14877], 10.00th=[19530], 20.00th=[23200], 00:41:33.442 | 30.00th=[24249], 40.00th=[25035], 50.00th=[25560], 60.00th=[32113], 00:41:33.442 | 70.00th=[35390], 80.00th=[51119], 90.00th=[65274], 95.00th=[68682], 00:41:33.442 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:41:33.442 | 99.99th=[74974] 00:41:33.442 bw ( KiB/s): min= 9803, max=10696, per=15.55%, avg=10249.50, stdev=631.45, samples=2 00:41:33.442 iops : min= 2450, max= 2674, avg=2562.00, stdev=158.39, samples=2 00:41:33.442 lat (msec) : 4=0.02%, 10=1.28%, 20=49.01%, 50=38.88%, 100=10.81% 00:41:33.442 cpu : usr=2.89%, sys=9.97%, ctx=391, majf=0, minf=13 00:41:33.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:41:33.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:33.442 issued rwts: total=2363,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:33.442 job1: (groupid=0, jobs=1): err= 0: pid=92534: Mon Jul 22 14:54:52 2024 00:41:33.442 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:41:33.442 slat (usec): min=7, max=10191, avg=86.72, stdev=529.06 00:41:33.442 clat (usec): min=5137, max=21987, avg=11777.21, stdev=2333.22 00:41:33.442 lat (usec): min=5155, max=22009, avg=11863.94, stdev=2364.63 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10028], 00:41:33.442 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:41:33.442 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14353], 95.00th=[16712], 00:41:33.442 | 99.00th=[20317], 99.50th=[21103], 99.90th=[21890], 99.95th=[21890], 00:41:33.442 | 99.99th=[21890] 00:41:33.442 write: IOPS=5853, BW=22.9MiB/s (24.0MB/s)(23.1MiB/1011msec); 0 zone resets 00:41:33.442 slat (usec): min=11, max=6342, avg=77.26, stdev=395.09 00:41:33.442 clat (usec): min=4185, max=21897, avg=10423.00, stdev=1974.61 00:41:33.442 lat (usec): min=4355, max=21910, avg=10500.26, stdev=2016.08 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[ 4948], 5.00th=[ 6194], 10.00th=[ 8455], 20.00th=[ 9503], 00:41:33.442 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10683], 60.00th=[10814], 00:41:33.442 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:41:33.442 | 99.00th=[17171], 99.50th=[19792], 99.90th=[21365], 99.95th=[21627], 00:41:33.442 | 99.99th=[21890] 00:41:33.442 bw ( KiB/s): min=21744, max=24625, per=35.19%, avg=23184.50, stdev=2037.17, samples=2 00:41:33.442 iops : min= 5436, max= 6156, avg=5796.00, stdev=509.12, samples=2 00:41:33.442 
lat (msec) : 10=25.81%, 20=73.37%, 50=0.82% 00:41:33.442 cpu : usr=6.04%, sys=20.30%, ctx=611, majf=0, minf=5 00:41:33.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:41:33.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:33.442 issued rwts: total=5632,5918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:33.442 job2: (groupid=0, jobs=1): err= 0: pid=92535: Mon Jul 22 14:54:52 2024 00:41:33.442 read: IOPS=2614, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1012msec) 00:41:33.442 slat (usec): min=7, max=13266, avg=154.70, stdev=875.57 00:41:33.442 clat (usec): min=6781, max=50968, avg=17550.03, stdev=6615.04 00:41:33.442 lat (usec): min=6799, max=50989, avg=17704.73, stdev=6684.76 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[ 7046], 5.00th=[11731], 10.00th=[11863], 20.00th=[13173], 00:41:33.442 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15270], 60.00th=[17433], 00:41:33.442 | 70.00th=[17957], 80.00th=[19268], 90.00th=[27132], 95.00th=[32375], 00:41:33.442 | 99.00th=[41157], 99.50th=[43779], 99.90th=[51119], 99.95th=[51119], 00:41:33.442 | 99.99th=[51119] 00:41:33.442 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:41:33.442 slat (usec): min=10, max=12429, avg=183.70, stdev=708.59 00:41:33.442 clat (usec): min=4492, max=57983, avg=26605.76, stdev=12568.95 00:41:33.442 lat (usec): min=4575, max=58002, avg=26789.46, stdev=12649.96 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[ 6390], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[15270], 00:41:33.442 | 30.00th=[17433], 40.00th=[23725], 50.00th=[24773], 60.00th=[25560], 00:41:33.442 | 70.00th=[31327], 80.00th=[39060], 90.00th=[45876], 95.00th=[50070], 00:41:33.442 | 99.00th=[55837], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:41:33.442 | 99.99th=[57934] 00:41:33.442 bw ( KiB/s): min=11952, max=12312, per=18.41%, avg=12132.00, stdev=254.56, samples=2 00:41:33.442 iops : min= 2988, max= 3078, avg=3033.00, stdev=63.64, samples=2 00:41:33.442 lat (msec) : 10=3.50%, 20=51.47%, 50=42.11%, 100=2.92% 00:41:33.442 cpu : usr=3.66%, sys=9.00%, ctx=463, majf=0, minf=12 00:41:33.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:41:33.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:33.442 issued rwts: total=2646,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:33.442 job3: (groupid=0, jobs=1): err= 0: pid=92536: Mon Jul 22 14:54:52 2024 00:41:33.442 read: IOPS=4720, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1002msec) 00:41:33.442 slat (usec): min=5, max=7115, avg=96.51, stdev=413.84 00:41:33.442 clat (usec): min=609, max=17998, avg=13033.38, stdev=1523.26 00:41:33.442 lat (usec): min=2659, max=18021, avg=13129.89, stdev=1479.52 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[ 6063], 5.00th=[10945], 10.00th=[11994], 20.00th=[12649], 00:41:33.442 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:41:33.442 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14222], 95.00th=[15139], 00:41:33.442 | 99.00th=[17695], 99.50th=[17957], 99.90th=[17957], 99.95th=[17957], 00:41:33.442 | 99.99th=[17957] 00:41:33.442 write: IOPS=5109, BW=20.0MiB/s 
(20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:41:33.442 slat (usec): min=7, max=4196, avg=97.36, stdev=388.17 00:41:33.442 clat (usec): min=9637, max=15561, avg=12683.20, stdev=1097.92 00:41:33.442 lat (usec): min=9721, max=16049, avg=12780.56, stdev=1105.86 00:41:33.442 clat percentiles (usec): 00:41:33.442 | 1.00th=[10552], 5.00th=[11076], 10.00th=[11469], 20.00th=[11863], 00:41:33.442 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:41:33.442 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14353], 95.00th=[14615], 00:41:33.442 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15533], 99.95th=[15533], 00:41:33.442 | 99.99th=[15533] 00:41:33.442 bw ( KiB/s): min=20480, max=20480, per=31.08%, avg=20480.00, stdev= 0.00, samples=1 00:41:33.442 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:41:33.442 lat (usec) : 750=0.01% 00:41:33.442 lat (msec) : 4=0.32%, 10=0.87%, 20=98.79% 00:41:33.442 cpu : usr=5.00%, sys=17.88%, ctx=578, majf=0, minf=5 00:41:33.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:33.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:33.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:33.442 issued rwts: total=4730,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:33.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:33.442 00:41:33.442 Run status group 0 (all jobs): 00:41:33.442 READ: bw=59.3MiB/s (62.2MB/s), 9414KiB/s-21.8MiB/s (9640kB/s-22.8MB/s), io=60.0MiB (63.0MB), run=1002-1012msec 00:41:33.442 WRITE: bw=64.3MiB/s (67.5MB/s), 9.96MiB/s-22.9MiB/s (10.4MB/s-24.0MB/s), io=65.1MiB (68.3MB), run=1002-1012msec 00:41:33.442 00:41:33.442 Disk stats (read/write): 00:41:33.442 nvme0n1: ios=2098/2175, merge=0/0, ticks=15097/35986, in_queue=51083, util=90.18% 00:41:33.442 nvme0n2: ios=4956/5120, merge=0/0, ticks=52530/48535, in_queue=101065, util=89.84% 00:41:33.442 nvme0n3: ios=2467/2560, merge=0/0, ticks=39389/66863, in_queue=106252, util=90.55% 00:41:33.442 nvme0n4: ios=4133/4568, merge=0/0, ticks=12461/11462, in_queue=23923, util=90.83% 00:41:33.442 14:54:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:33.442 14:54:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=92550 00:41:33.442 14:54:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:33.442 14:54:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:33.442 [global] 00:41:33.442 thread=1 00:41:33.442 invalidate=1 00:41:33.442 rw=read 00:41:33.442 time_based=1 00:41:33.442 runtime=10 00:41:33.442 ioengine=libaio 00:41:33.442 direct=1 00:41:33.442 bs=4096 00:41:33.443 iodepth=1 00:41:33.443 norandommap=1 00:41:33.443 numjobs=1 00:41:33.443 00:41:33.443 [job0] 00:41:33.443 filename=/dev/nvme0n1 00:41:33.443 [job1] 00:41:33.443 filename=/dev/nvme0n2 00:41:33.443 [job2] 00:41:33.443 filename=/dev/nvme0n3 00:41:33.443 [job3] 00:41:33.443 filename=/dev/nvme0n4 00:41:33.443 Could not set queue depth (nvme0n1) 00:41:33.443 Could not set queue depth (nvme0n2) 00:41:33.443 Could not set queue depth (nvme0n3) 00:41:33.443 Could not set queue depth (nvme0n4) 00:41:33.443 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:33.443 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:33.443 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:33.443 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:33.443 fio-3.35 00:41:33.443 Starting 4 threads 00:41:36.734 14:54:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:36.734 fio: pid=92593, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:36.734 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=42045440, buflen=4096 00:41:36.734 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:36.734 fio: pid=92592, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:36.734 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=47144960, buflen=4096 00:41:36.734 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:36.734 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:36.993 fio: pid=92590, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:36.993 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=16715776, buflen=4096 00:41:36.993 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:36.993 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:37.253 fio: pid=92591, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:41:37.253 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=30113792, buflen=4096 00:41:37.253 00:41:37.253 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92590: Mon Jul 22 14:54:56 2024 00:41:37.253 read: IOPS=6138, BW=24.0MiB/s (25.1MB/s)(79.9MiB/3334msec) 00:41:37.253 slat (usec): min=6, max=14795, avg=11.03, stdev=165.39 00:41:37.253 clat (usec): min=75, max=3488, avg=151.08, stdev=37.16 00:41:37.253 lat (usec): min=110, max=14999, avg=162.11, stdev=169.96 00:41:37.253 clat percentiles (usec): 00:41:37.253 | 1.00th=[ 123], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:41:37.253 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 153], 00:41:37.253 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 174], 00:41:37.253 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 241], 99.95th=[ 627], 00:41:37.253 | 99.99th=[ 1696] 00:41:37.253 bw ( KiB/s): min=22536, max=26232, per=33.51%, avg=24714.17, stdev=1298.92, samples=6 00:41:37.253 iops : min= 5634, max= 6558, avg=6178.50, stdev=324.72, samples=6 00:41:37.253 lat (usec) : 100=0.01%, 250=99.90%, 500=0.02%, 750=0.01%, 1000=0.01% 00:41:37.253 lat (msec) : 2=0.03%, 4=0.01% 00:41:37.253 cpu : usr=0.93%, sys=4.20%, ctx=20473, majf=0, minf=1 00:41:37.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 issued rwts: total=20466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:37.253 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote 
I/O error): pid=92591: Mon Jul 22 14:54:56 2024 00:41:37.253 read: IOPS=6634, BW=25.9MiB/s (27.2MB/s)(92.7MiB/3578msec) 00:41:37.253 slat (usec): min=6, max=13877, avg=11.24, stdev=157.76 00:41:37.253 clat (usec): min=85, max=1622, avg=138.77, stdev=24.70 00:41:37.253 lat (usec): min=93, max=14040, avg=150.02, stdev=160.13 00:41:37.253 clat percentiles (usec): 00:41:37.253 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 120], 20.00th=[ 129], 00:41:37.253 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:41:37.253 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:41:37.253 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 206], 99.95th=[ 404], 00:41:37.253 | 99.99th=[ 1401] 00:41:37.253 bw ( KiB/s): min=24640, max=27832, per=35.72%, avg=26347.00, stdev=1168.52, samples=6 00:41:37.253 iops : min= 6160, max= 6958, avg=6586.67, stdev=292.11, samples=6 00:41:37.253 lat (usec) : 100=1.06%, 250=98.86%, 500=0.04%, 750=0.01%, 1000=0.01% 00:41:37.253 lat (msec) : 2=0.02% 00:41:37.253 cpu : usr=0.64%, sys=5.00%, ctx=23749, majf=0, minf=1 00:41:37.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 issued rwts: total=23737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:37.253 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92592: Mon Jul 22 14:54:56 2024 00:41:37.253 read: IOPS=3670, BW=14.3MiB/s (15.0MB/s)(45.0MiB/3136msec) 00:41:37.253 slat (usec): min=6, max=12565, avg=10.84, stdev=148.73 00:41:37.253 clat (usec): min=112, max=3835, avg=260.80, stdev=56.99 00:41:37.253 lat (usec): min=120, max=12720, avg=271.64, stdev=158.02 00:41:37.253 clat percentiles (usec): 00:41:37.253 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 172], 20.00th=[ 245], 00:41:37.253 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:41:37.253 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 306], 00:41:37.253 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 506], 99.95th=[ 783], 00:41:37.253 | 99.99th=[ 1598] 00:41:37.253 bw ( KiB/s): min=13792, max=15096, per=19.37%, avg=14289.83, stdev=507.96, samples=6 00:41:37.253 iops : min= 3448, max= 3774, avg=3572.33, stdev=127.02, samples=6 00:41:37.253 lat (usec) : 250=23.47%, 500=76.41%, 750=0.05%, 1000=0.03% 00:41:37.253 lat (msec) : 2=0.02%, 4=0.01% 00:41:37.253 cpu : usr=0.48%, sys=2.68%, ctx=11513, majf=0, minf=1 00:41:37.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 issued rwts: total=11511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:37.253 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=92593: Mon Jul 22 14:54:56 2024 00:41:37.253 read: IOPS=3526, BW=13.8MiB/s (14.4MB/s)(40.1MiB/2911msec) 00:41:37.253 slat (nsec): min=7052, max=89693, avg=16151.50, stdev=5269.41 00:41:37.253 clat (usec): min=130, max=3726, avg=265.62, stdev=47.00 00:41:37.253 lat (usec): min=142, max=3747, avg=281.77, stdev=47.82 00:41:37.253 clat percentiles (usec): 00:41:37.253 | 1.00th=[ 225], 5.00th=[ 235], 
10.00th=[ 241], 20.00th=[ 249], 00:41:37.253 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:41:37.253 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:41:37.253 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 371], 99.95th=[ 799], 00:41:37.253 | 99.99th=[ 1745] 00:41:37.253 bw ( KiB/s): min=13616, max=15096, per=19.32%, avg=14251.80, stdev=602.14, samples=5 00:41:37.253 iops : min= 3404, max= 3774, avg=3562.80, stdev=150.58, samples=5 00:41:37.253 lat (usec) : 250=22.30%, 500=77.61%, 750=0.01%, 1000=0.04% 00:41:37.253 lat (msec) : 2=0.03%, 4=0.01% 00:41:37.253 cpu : usr=0.93%, sys=4.64%, ctx=10266, majf=0, minf=2 00:41:37.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:37.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:37.253 issued rwts: total=10266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:37.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:37.253 00:41:37.253 Run status group 0 (all jobs): 00:41:37.253 READ: bw=72.0MiB/s (75.5MB/s), 13.8MiB/s-25.9MiB/s (14.4MB/s-27.2MB/s), io=258MiB (270MB), run=2911-3578msec 00:41:37.253 00:41:37.253 Disk stats (read/write): 00:41:37.253 nvme0n1: ios=19341/0, merge=0/0, ticks=2955/0, in_queue=2955, util=96.00% 00:41:37.253 nvme0n2: ios=22098/0, merge=0/0, ticks=3153/0, in_queue=3153, util=95.47% 00:41:37.253 nvme0n3: ios=11510/0, merge=0/0, ticks=3002/0, in_queue=3002, util=96.35% 00:41:37.253 nvme0n4: ios=10196/0, merge=0/0, ticks=2729/0, in_queue=2729, util=96.85% 00:41:37.253 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:37.253 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:37.512 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:37.512 14:54:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:37.770 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:37.770 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:37.770 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:37.770 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:38.028 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:38.029 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 92550 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:38.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:38.287 14:54:57 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:38.287 14:54:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:41:38.287 nvmf hotplug test: fio failed as expected 00:41:38.288 14:54:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:41:38.288 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:38.288 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:38.288 14:54:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:38.546 rmmod nvme_tcp 00:41:38.546 rmmod nvme_fabrics 00:41:38.546 rmmod nvme_keyring 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 92067 ']' 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 92067 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 92067 ']' 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 92067 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:38.546 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92067 00:41:38.804 killing process with pid 92067 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:41:38.804 14:54:58 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92067' 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 92067 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 92067 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:38.804 14:54:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:41:39.078 ************************************ 00:41:39.078 END TEST nvmf_fio_target 00:41:39.078 ************************************ 00:41:39.078 00:41:39.078 real 0m18.293s 00:41:39.078 user 1m10.105s 00:41:39.078 sys 0m7.622s 00:41:39.078 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:39.078 14:54:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:39.078 14:54:58 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:41:39.078 14:54:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:41:39.078 14:54:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:41:39.078 14:54:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:39.078 ************************************ 00:41:39.078 START TEST nvmf_bdevio 00:41:39.078 ************************************ 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:41:39.078 * Looking for test storage... 
00:41:39.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.078 14:54:58 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:39.078 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:39.359 Cannot find device "nvmf_tgt_br" 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:39.359 Cannot find device "nvmf_tgt_br2" 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:39.359 Cannot find device "nvmf_tgt_br" 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:39.359 Cannot find device "nvmf_tgt_br2" 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:39.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:39.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:39.359 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:39.360 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:39.360 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:39.360 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:39.360 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:39.360 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:39.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:39.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:41:39.619 00:41:39.619 --- 10.0.0.2 ping statistics --- 00:41:39.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.619 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:39.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:39.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:41:39.619 00:41:39.619 --- 10.0.0.3 ping statistics --- 00:41:39.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.619 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:39.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:39.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:41:39.619 00:41:39.619 --- 10.0.0.1 ping statistics --- 00:41:39.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:39.619 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:39.619 14:54:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=92903 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 92903 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 92903 ']' 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:39.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:39.619 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:39.619 [2024-07-22 14:54:59.055277] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:41:39.619 [2024-07-22 14:54:59.055333] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:39.619 [2024-07-22 14:54:59.195958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:39.619 [2024-07-22 14:54:59.246982] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:39.619 [2024-07-22 14:54:59.247033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:41:39.619 [2024-07-22 14:54:59.247039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:39.619 [2024-07-22 14:54:59.247043] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:39.619 [2024-07-22 14:54:59.247047] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:39.619 [2024-07-22 14:54:59.247252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:41:39.619 [2024-07-22 14:54:59.247494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:41:39.619 [2024-07-22 14:54:59.247694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:41:39.619 [2024-07-22 14:54:59.247710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:40.556 [2024-07-22 14:54:59.958924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:40.556 Malloc0 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.556 14:54:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:41:40.556 [2024-07-22 14:55:00.026124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:41:40.556 { 00:41:40.556 "params": { 00:41:40.556 "name": "Nvme$subsystem", 00:41:40.556 "trtype": "$TEST_TRANSPORT", 00:41:40.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:40.556 "adrfam": "ipv4", 00:41:40.556 "trsvcid": "$NVMF_PORT", 00:41:40.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:40.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:40.556 "hdgst": ${hdgst:-false}, 00:41:40.556 "ddgst": ${ddgst:-false} 00:41:40.556 }, 00:41:40.556 "method": "bdev_nvme_attach_controller" 00:41:40.556 } 00:41:40.556 EOF 00:41:40.556 )") 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:41:40.556 14:55:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:41:40.556 "params": { 00:41:40.556 "name": "Nvme1", 00:41:40.556 "trtype": "tcp", 00:41:40.556 "traddr": "10.0.0.2", 00:41:40.556 "adrfam": "ipv4", 00:41:40.556 "trsvcid": "4420", 00:41:40.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:40.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:40.556 "hdgst": false, 00:41:40.556 "ddgst": false 00:41:40.556 }, 00:41:40.556 "method": "bdev_nvme_attach_controller" 00:41:40.556 }' 00:41:40.556 [2024-07-22 14:55:00.085104] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:41:40.556 [2024-07-22 14:55:00.085163] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92965 ] 00:41:40.814 [2024-07-22 14:55:00.224976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:40.814 [2024-07-22 14:55:00.276275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:41:40.814 [2024-07-22 14:55:00.276464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.814 [2024-07-22 14:55:00.276467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:41:40.814 I/O targets: 00:41:40.814 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:40.814 00:41:40.814 00:41:40.814 CUnit - A unit testing framework for C - Version 2.1-3 00:41:40.814 http://cunit.sourceforge.net/ 00:41:40.814 00:41:40.814 00:41:40.814 Suite: bdevio tests on: Nvme1n1 00:41:41.072 Test: blockdev write read block ...passed 00:41:41.072 Test: blockdev write zeroes read block ...passed 00:41:41.072 Test: blockdev write zeroes read no split ...passed 00:41:41.072 Test: blockdev write zeroes read split ...passed 00:41:41.072 Test: blockdev write zeroes read split partial ...passed 00:41:41.072 Test: blockdev reset ...[2024-07-22 14:55:00.546810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:41:41.072 [2024-07-22 14:55:00.546906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26100d0 (9): Bad file descriptor 00:41:41.072 passed 00:41:41.072 Test: blockdev write read 8 blocks ...[2024-07-22 14:55:00.563463] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:41:41.072 passed 00:41:41.072 Test: blockdev write read size > 128k ...passed 00:41:41.072 Test: blockdev write read invalid size ...passed 00:41:41.072 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:41.072 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:41.072 Test: blockdev write read max offset ...passed 00:41:41.072 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:41.072 Test: blockdev writev readv 8 blocks ...passed 00:41:41.072 Test: blockdev writev readv 30 x 1block ...passed 00:41:41.331 Test: blockdev writev readv block ...passed 00:41:41.331 Test: blockdev writev readv size > 128k ...passed 00:41:41.331 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:41.331 Test: blockdev comparev and writev ...[2024-07-22 14:55:00.733620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.733679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.733695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.733704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.733949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.733966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.733977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.733984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.734209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.734226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.734238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.734244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.734467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.734483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.734494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:41.331 [2024-07-22 14:55:00.734500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:41:41.331 passed 00:41:41.331 Test: blockdev nvme passthru rw ...passed 00:41:41.331 Test: blockdev nvme passthru vendor specific ...[2024-07-22 14:55:00.816947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:41.331 [2024-07-22 14:55:00.816982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.817080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:41.331 [2024-07-22 14:55:00.817092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.817177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:41.331 [2024-07-22 14:55:00.817185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:41.331 [2024-07-22 14:55:00.817275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:41.331 [2024-07-22 14:55:00.817283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:41.331 passed 00:41:41.331 Test: blockdev nvme admin passthru ...passed 00:41:41.331 Test: blockdev copy ...passed 00:41:41.331 00:41:41.331 Run Summary: Type Total Ran Passed Failed Inactive 00:41:41.331 suites 1 1 n/a 0 0 00:41:41.331 tests 23 23 23 0 0 00:41:41.331 asserts 152 152 152 0 n/a 00:41:41.331 00:41:41.331 Elapsed time = 0.916 seconds 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:41:41.590 rmmod nvme_tcp 00:41:41.590 rmmod nvme_fabrics 00:41:41.590 rmmod nvme_keyring 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 92903 ']' 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 92903 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 92903 ']' 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # kill -0 92903 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92903 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:41:41.590 killing process with pid 92903 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92903' 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 92903 00:41:41.590 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 92903 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:41:41.849 00:41:41.849 real 0m2.957s 00:41:41.849 user 0m10.524s 00:41:41.849 sys 0m0.758s 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:41.849 14:55:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:41.849 ************************************ 00:41:41.849 END TEST nvmf_bdevio 00:41:41.849 ************************************ 00:41:42.108 14:55:01 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:41:42.108 14:55:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:41:42.108 14:55:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:41:42.108 14:55:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:42.108 ************************************ 00:41:42.108 START TEST nvmf_auth_target 00:41:42.108 ************************************ 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:41:42.108 * Looking for test storage... 
00:41:42.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:41:42.108 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:41:42.366 Cannot find device "nvmf_tgt_br" 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:41:42.366 Cannot find device "nvmf_tgt_br2" 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:41:42.366 Cannot find device "nvmf_tgt_br" 00:41:42.366 
14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:41:42.366 Cannot find device "nvmf_tgt_br2" 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:41:42.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:41:42.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:41:42.366 14:55:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:41:42.625 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:41:42.625 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:41:42.625 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:41:42.625 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:41:42.625 14:55:02 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:41:42.625 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:41:42.625 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:41:42.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:42.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:41:42.626 00:41:42.626 --- 10.0.0.2 ping statistics --- 00:41:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.626 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:41:42.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:41:42.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:41:42.626 00:41:42.626 --- 10.0.0.3 ping statistics --- 00:41:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.626 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:41:42.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:42.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:41:42.626 00:41:42.626 --- 10.0.0.1 ping statistics --- 00:41:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:42.626 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=93145 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 93145 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93145 ']' 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.626 14:55:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:42.626 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:43.561 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:43.561 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:41:43.561 14:55:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:41:43.561 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:43.561 14:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=93189 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=971c53dfd4d2addbef512458017fe27fd086089a8db0375f 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.HTy 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 971c53dfd4d2addbef512458017fe27fd086089a8db0375f 0 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 971c53dfd4d2addbef512458017fe27fd086089a8db0375f 0 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:41:43.561 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=971c53dfd4d2addbef512458017fe27fd086089a8db0375f 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.HTy 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.HTy 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.HTy 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fc2a93a46613dba027861a1ac38ab6050fe38dc58e5f712a922cc6cf2714d977 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.WlG 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fc2a93a46613dba027861a1ac38ab6050fe38dc58e5f712a922cc6cf2714d977 3 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fc2a93a46613dba027861a1ac38ab6050fe38dc58e5f712a922cc6cf2714d977 3 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fc2a93a46613dba027861a1ac38ab6050fe38dc58e5f712a922cc6cf2714d977 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.WlG 00:41:43.562 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.WlG 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.WlG 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e3cb5448509755bd31c2a921041a5e24 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.X2H 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e3cb5448509755bd31c2a921041a5e24 1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e3cb5448509755bd31c2a921041a5e24 1 
00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e3cb5448509755bd31c2a921041a5e24 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.X2H 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.X2H 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.X2H 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ac3f76154700ef02ef2fa34abc448be710ec56f2d56b1359 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vUC 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ac3f76154700ef02ef2fa34abc448be710ec56f2d56b1359 2 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ac3f76154700ef02ef2fa34abc448be710ec56f2d56b1359 2 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ac3f76154700ef02ef2fa34abc448be710ec56f2d56b1359 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vUC 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vUC 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.vUC 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:41:43.821 
14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=648c084d8b7e11c7d5323680f5efc038fc1046ea352ae0ca 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.swU 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 648c084d8b7e11c7d5323680f5efc038fc1046ea352ae0ca 2 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 648c084d8b7e11c7d5323680f5efc038fc1046ea352ae0ca 2 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=648c084d8b7e11c7d5323680f5efc038fc1046ea352ae0ca 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.swU 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.swU 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.swU 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4e9126d044bf0d8a9ab5ba795a030fb4 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.mUL 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4e9126d044bf0d8a9ab5ba795a030fb4 1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4e9126d044bf0d8a9ab5ba795a030fb4 1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4e9126d044bf0d8a9ab5ba795a030fb4 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:41:43.821 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:41:44.080 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.mUL 00:41:44.080 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.mUL 00:41:44.080 14:55:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.mUL 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=07706ed7e72aed23dd826c586384bf3c29c37eb0ce0431523741420bcce01ba0 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.urs 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 07706ed7e72aed23dd826c586384bf3c29c37eb0ce0431523741420bcce01ba0 3 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 07706ed7e72aed23dd826c586384bf3c29c37eb0ce0431523741420bcce01ba0 3 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=07706ed7e72aed23dd826c586384bf3c29c37eb0ce0431523741420bcce01ba0 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.urs 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.urs 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.urs 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 93145 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93145 ']' 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:44.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:44.081 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 93189 /var/tmp/host.sock 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 93189 ']' 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:44.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:44.340 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:44.598 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:44.598 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:41:44.598 14:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:41:44.598 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.598 14:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HTy 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.HTy 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.HTy 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.WlG ]] 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WlG 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.598 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WlG 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.WlG 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.X2H 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.X2H 00:41:44.857 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.X2H 00:41:45.117 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.vUC ]] 00:41:45.117 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vUC 00:41:45.117 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.117 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:45.117 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.117 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vUC 00:41:45.117 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vUC 00:41:45.376 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:41:45.376 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.swU 00:41:45.376 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.376 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:45.376 14:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.376 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.swU 00:41:45.376 14:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.swU 00:41:45.634 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.mUL ]] 00:41:45.634 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mUL 00:41:45.634 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.634 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:45.634 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.634 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mUL 00:41:45.634 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mUL 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:41:45.932 
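Each secret file is then registered twice with keyring_file_add_key: once against the target's RPC socket (the rpc_cmd default, presumably the /var/tmp/spdk.sock waited on above) and once against the host-side app listening on /var/tmp/host.sock. The keyN entries are the host secrets; the ckeyN entries are the controller secrets used for bidirectional authentication. Boiled down, one pair of registrations looks like this, with paths taken from the trace and scripts/rpc.py standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path:

# Register key1/ckey1 with both sides; the same pattern repeats for keys 0-3.
# (key3 has no companion ckey, so only the key itself is registered in its slot.)
scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.X2H
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.X2H
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vUC
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vUC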
14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.urs 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.urs 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.urs 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:45.932 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:46.197 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:46.456 00:41:46.456 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:41:46.456 14:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:41:46.456 14:55:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:41:46.715 { 00:41:46.715 "auth": { 00:41:46.715 "dhgroup": "null", 00:41:46.715 "digest": "sha256", 00:41:46.715 "state": "completed" 00:41:46.715 }, 00:41:46.715 "cntlid": 1, 00:41:46.715 "listen_address": { 00:41:46.715 "adrfam": "IPv4", 00:41:46.715 "traddr": "10.0.0.2", 00:41:46.715 "trsvcid": "4420", 00:41:46.715 "trtype": "TCP" 00:41:46.715 }, 00:41:46.715 "peer_address": { 00:41:46.715 "adrfam": "IPv4", 00:41:46.715 "traddr": "10.0.0.1", 00:41:46.715 "trsvcid": "38090", 00:41:46.715 "trtype": "TCP" 00:41:46.715 }, 00:41:46.715 "qid": 0, 00:41:46.715 "state": "enabled" 00:41:46.715 } 00:41:46.715 ]' 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:41:46.715 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:41:46.974 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:41:46.974 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:41:46.974 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:41:46.974 14:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:41:51.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:41:51.166 14:55:10 
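That block is the core assertion of every pass: once bdev_nvme_attach_controller succeeds, the target's nvmf_subsystem_get_qpairs output must report the digest and DH group under test and an auth state of "completed" for the new queue pair. The three separate jq probes in the trace can be condensed into one; the following is just a compact restatement of that check, not a different test:

# One-shot version of the per-pass verification (sha256/null expected on this pass).
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
# expected output:
#   sha256 null completed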
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:51.166 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:41:51.166 { 00:41:51.166 "auth": { 00:41:51.166 "dhgroup": "null", 00:41:51.166 "digest": "sha256", 00:41:51.166 "state": "completed" 00:41:51.166 }, 00:41:51.166 "cntlid": 3, 00:41:51.166 "listen_address": { 00:41:51.166 "adrfam": "IPv4", 00:41:51.166 "traddr": "10.0.0.2", 00:41:51.166 "trsvcid": "4420", 00:41:51.166 "trtype": "TCP" 00:41:51.166 }, 00:41:51.166 "peer_address": { 00:41:51.166 "adrfam": "IPv4", 00:41:51.166 
"traddr": "10.0.0.1", 00:41:51.166 "trsvcid": "38132", 00:41:51.166 "trtype": "TCP" 00:41:51.166 }, 00:41:51.166 "qid": 0, 00:41:51.166 "state": "enabled" 00:41:51.166 } 00:41:51.166 ]' 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:41:51.166 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:41:51.425 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:41:51.425 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:41:51.425 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:41:51.425 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:41:51.425 14:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:41:51.425 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:41:52.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:52.362 14:55:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:52.362 14:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:52.621 00:41:52.621 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:41:52.622 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:41:52.622 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:41:52.881 { 00:41:52.881 "auth": { 00:41:52.881 "dhgroup": "null", 00:41:52.881 "digest": "sha256", 00:41:52.881 "state": "completed" 00:41:52.881 }, 00:41:52.881 "cntlid": 5, 00:41:52.881 "listen_address": { 00:41:52.881 "adrfam": "IPv4", 00:41:52.881 "traddr": "10.0.0.2", 00:41:52.881 "trsvcid": "4420", 00:41:52.881 "trtype": "TCP" 00:41:52.881 }, 00:41:52.881 "peer_address": { 00:41:52.881 "adrfam": "IPv4", 00:41:52.881 "traddr": "10.0.0.1", 00:41:52.881 "trsvcid": "45016", 00:41:52.881 "trtype": "TCP" 00:41:52.881 }, 00:41:52.881 "qid": 0, 00:41:52.881 "state": "enabled" 00:41:52.881 } 00:41:52.881 ]' 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:41:52.881 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:41:53.140 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:41:53.140 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:41:53.140 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:41:53.140 14:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:41:53.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:53.707 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:41:53.966 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:41:54.225 00:41:54.225 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:41:54.225 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:41:54.225 14:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:41:54.483 { 00:41:54.483 "auth": { 00:41:54.483 "dhgroup": "null", 00:41:54.483 "digest": "sha256", 00:41:54.483 "state": "completed" 00:41:54.483 }, 00:41:54.483 "cntlid": 7, 00:41:54.483 "listen_address": { 00:41:54.483 "adrfam": "IPv4", 00:41:54.483 "traddr": "10.0.0.2", 00:41:54.483 "trsvcid": "4420", 00:41:54.483 "trtype": "TCP" 00:41:54.483 }, 00:41:54.483 "peer_address": { 00:41:54.483 "adrfam": "IPv4", 00:41:54.483 "traddr": "10.0.0.1", 00:41:54.483 "trsvcid": "45036", 00:41:54.483 "trtype": "TCP" 00:41:54.483 }, 00:41:54.483 "qid": 0, 00:41:54.483 "state": "enabled" 00:41:54.483 } 00:41:54.483 ]' 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:41:54.483 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:41:54.742 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:41:54.742 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:41:54.742 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:41:54.742 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:41:54.742 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:41:55.000 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:41:55.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe2048 00:41:55.566 14:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:55.566 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:41:55.566 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:41:55.566 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:41:55.566 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:41:55.566 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:41:55.566 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:41:55.567 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:55.567 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:55.567 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:55.567 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:55.567 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:55.567 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:41:55.826 00:41:55.826 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:41:55.826 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:41:55.826 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:41:56.084 { 00:41:56.084 "auth": { 00:41:56.084 "dhgroup": "ffdhe2048", 00:41:56.084 "digest": "sha256", 00:41:56.084 "state": "completed" 00:41:56.084 }, 00:41:56.084 "cntlid": 9, 00:41:56.084 "listen_address": { 00:41:56.084 "adrfam": "IPv4", 00:41:56.084 "traddr": "10.0.0.2", 00:41:56.084 "trsvcid": "4420", 00:41:56.084 "trtype": "TCP" 00:41:56.084 }, 00:41:56.084 "peer_address": { 00:41:56.084 "adrfam": "IPv4", 00:41:56.084 "traddr": "10.0.0.1", 00:41:56.084 "trsvcid": "45066", 00:41:56.084 "trtype": 
"TCP" 00:41:56.084 }, 00:41:56.084 "qid": 0, 00:41:56.084 "state": "enabled" 00:41:56.084 } 00:41:56.084 ]' 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:41:56.084 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:41:56.342 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:41:56.342 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:41:56.342 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:41:56.342 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:41:56.342 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:41:56.601 14:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:41:57.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:57.168 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:41:57.169 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:41:57.169 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:41:57.169 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:41:57.169 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:41:57.169 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:41:57.169 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:57.427 14:55:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.427 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:57.427 14:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.427 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:57.427 14:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:41:57.686 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:41:57.686 { 00:41:57.686 "auth": { 00:41:57.686 "dhgroup": "ffdhe2048", 00:41:57.686 "digest": "sha256", 00:41:57.686 "state": "completed" 00:41:57.686 }, 00:41:57.686 "cntlid": 11, 00:41:57.686 "listen_address": { 00:41:57.686 "adrfam": "IPv4", 00:41:57.686 "traddr": "10.0.0.2", 00:41:57.686 "trsvcid": "4420", 00:41:57.686 "trtype": "TCP" 00:41:57.686 }, 00:41:57.686 "peer_address": { 00:41:57.686 "adrfam": "IPv4", 00:41:57.686 "traddr": "10.0.0.1", 00:41:57.686 "trsvcid": "45086", 00:41:57.686 "trtype": "TCP" 00:41:57.686 }, 00:41:57.686 "qid": 0, 00:41:57.686 "state": "enabled" 00:41:57.686 } 00:41:57.686 ]' 00:41:57.686 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:41:57.945 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:41:57.945 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:41:57.945 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:41:57.945 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:41:57.945 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:41:57.945 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:41:57.945 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:41:58.203 14:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:41:58.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:41:58.771 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:41:58.772 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:41:58.772 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:58.772 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:58.772 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:58.772 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:58.772 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:58.772 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:41:59.030 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:41:59.290 14:55:18 
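The secrets in these connect lines are not new material; each is just the hex key from the corresponding /tmp/spdk.key-* file, base64-wrapped together with what appears to be a 4-byte checksum. A quick way to see that from a shell (GNU coreutils assumed for the negative head offset):

# Peel the base64 payload out of the DHHC-1:01 secret used above and drop the
# 4 trailing checksum bytes; what remains is the 32-character hex key itself.
echo 'DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF:' \
    | cut -d: -f3 | base64 -d | head -c -4; echo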
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:41:59.290 { 00:41:59.290 "auth": { 00:41:59.290 "dhgroup": "ffdhe2048", 00:41:59.290 "digest": "sha256", 00:41:59.290 "state": "completed" 00:41:59.290 }, 00:41:59.290 "cntlid": 13, 00:41:59.290 "listen_address": { 00:41:59.290 "adrfam": "IPv4", 00:41:59.290 "traddr": "10.0.0.2", 00:41:59.290 "trsvcid": "4420", 00:41:59.290 "trtype": "TCP" 00:41:59.290 }, 00:41:59.290 "peer_address": { 00:41:59.290 "adrfam": "IPv4", 00:41:59.290 "traddr": "10.0.0.1", 00:41:59.290 "trsvcid": "45104", 00:41:59.290 "trtype": "TCP" 00:41:59.290 }, 00:41:59.290 "qid": 0, 00:41:59.290 "state": "enabled" 00:41:59.290 } 00:41:59.290 ]' 00:41:59.290 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:41:59.550 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:41:59.550 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:41:59.550 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:41:59.550 14:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:41:59.550 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:41:59.550 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:41:59.550 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:41:59.808 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:00.377 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:00.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:00.377 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:00.377 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.377 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:00.377 14:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:00.377 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:00.377 14:55:19 
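From here the log repeats the same connect/verify/disconnect cycle while the host-side bdev_nvme_set_options call walks through the digest and DH-group combinations, trying all four keys in each. The shape of that sweep, reconstructed from the loop heads visible in the trace (the full digest and dhgroup lists beyond sha256, null, ffdhe2048 and ffdhe3072 are an assumption based on the option names, not something this excerpt shows):

# Assumed outline of the sweep driving the rest of this phase.
for digest in sha256 sha384 sha512; do                      # only sha256 visible so far
  for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3; do
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"     # helper defined in target/auth.sh
    done
  done
done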
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:00.377 14:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:00.637 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:00.896 00:42:00.896 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:00.896 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:00.896 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:00.896 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:00.896 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:00.896 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.896 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:01.155 14:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:01.156 { 00:42:01.156 "auth": { 00:42:01.156 "dhgroup": "ffdhe2048", 00:42:01.156 "digest": "sha256", 00:42:01.156 "state": "completed" 00:42:01.156 }, 00:42:01.156 "cntlid": 15, 00:42:01.156 "listen_address": { 00:42:01.156 "adrfam": "IPv4", 00:42:01.156 "traddr": "10.0.0.2", 00:42:01.156 "trsvcid": "4420", 00:42:01.156 "trtype": "TCP" 00:42:01.156 }, 00:42:01.156 "peer_address": { 00:42:01.156 "adrfam": "IPv4", 00:42:01.156 "traddr": "10.0.0.1", 00:42:01.156 "trsvcid": 
"45138", 00:42:01.156 "trtype": "TCP" 00:42:01.156 }, 00:42:01.156 "qid": 0, 00:42:01.156 "state": "enabled" 00:42:01.156 } 00:42:01.156 ]' 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:01.156 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:01.415 14:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:01.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:01.984 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:02.244 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:02.503 00:42:02.503 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:02.503 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:02.503 14:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:02.762 { 00:42:02.762 "auth": { 00:42:02.762 "dhgroup": "ffdhe3072", 00:42:02.762 "digest": "sha256", 00:42:02.762 "state": "completed" 00:42:02.762 }, 00:42:02.762 "cntlid": 17, 00:42:02.762 "listen_address": { 00:42:02.762 "adrfam": "IPv4", 00:42:02.762 "traddr": "10.0.0.2", 00:42:02.762 "trsvcid": "4420", 00:42:02.762 "trtype": "TCP" 00:42:02.762 }, 00:42:02.762 "peer_address": { 00:42:02.762 "adrfam": "IPv4", 00:42:02.762 "traddr": "10.0.0.1", 00:42:02.762 "trsvcid": "42156", 00:42:02.762 "trtype": "TCP" 00:42:02.762 }, 00:42:02.762 "qid": 0, 00:42:02.762 "state": "enabled" 00:42:02.762 } 00:42:02.762 ]' 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:02.762 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:03.022 14:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:03.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:03.589 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:03.848 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:04.107 00:42:04.107 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:04.107 14:55:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:42:04.107 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:04.367 { 00:42:04.367 "auth": { 00:42:04.367 "dhgroup": "ffdhe3072", 00:42:04.367 "digest": "sha256", 00:42:04.367 "state": "completed" 00:42:04.367 }, 00:42:04.367 "cntlid": 19, 00:42:04.367 "listen_address": { 00:42:04.367 "adrfam": "IPv4", 00:42:04.367 "traddr": "10.0.0.2", 00:42:04.367 "trsvcid": "4420", 00:42:04.367 "trtype": "TCP" 00:42:04.367 }, 00:42:04.367 "peer_address": { 00:42:04.367 "adrfam": "IPv4", 00:42:04.367 "traddr": "10.0.0.1", 00:42:04.367 "trsvcid": "42174", 00:42:04.367 "trtype": "TCP" 00:42:04.367 }, 00:42:04.367 "qid": 0, 00:42:04.367 "state": "enabled" 00:42:04.367 } 00:42:04.367 ]' 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:04.367 14:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:04.626 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:05.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- 
# for keyid in "${!keys[@]}" 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:05.196 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:05.455 14:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:05.715 00:42:05.715 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:05.715 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:05.715 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:05.975 { 00:42:05.975 "auth": { 00:42:05.975 "dhgroup": "ffdhe3072", 00:42:05.975 "digest": "sha256", 00:42:05.975 "state": "completed" 00:42:05.975 }, 00:42:05.975 "cntlid": 21, 00:42:05.975 "listen_address": { 00:42:05.975 "adrfam": "IPv4", 00:42:05.975 "traddr": "10.0.0.2", 00:42:05.975 "trsvcid": "4420", 00:42:05.975 "trtype": "TCP" 00:42:05.975 }, 
00:42:05.975 "peer_address": { 00:42:05.975 "adrfam": "IPv4", 00:42:05.975 "traddr": "10.0.0.1", 00:42:05.975 "trsvcid": "42194", 00:42:05.975 "trtype": "TCP" 00:42:05.975 }, 00:42:05.975 "qid": 0, 00:42:05.975 "state": "enabled" 00:42:05.975 } 00:42:05.975 ]' 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:05.975 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:06.234 14:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:06.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:06.803 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:07.063 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:07.322 00:42:07.323 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:07.323 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:07.323 14:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:07.582 { 00:42:07.582 "auth": { 00:42:07.582 "dhgroup": "ffdhe3072", 00:42:07.582 "digest": "sha256", 00:42:07.582 "state": "completed" 00:42:07.582 }, 00:42:07.582 "cntlid": 23, 00:42:07.582 "listen_address": { 00:42:07.582 "adrfam": "IPv4", 00:42:07.582 "traddr": "10.0.0.2", 00:42:07.582 "trsvcid": "4420", 00:42:07.582 "trtype": "TCP" 00:42:07.582 }, 00:42:07.582 "peer_address": { 00:42:07.582 "adrfam": "IPv4", 00:42:07.582 "traddr": "10.0.0.1", 00:42:07.582 "trsvcid": "42232", 00:42:07.582 "trtype": "TCP" 00:42:07.582 }, 00:42:07.582 "qid": 0, 00:42:07.582 "state": "enabled" 00:42:07.582 } 00:42:07.582 ]' 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:07.582 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:07.842 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:07.842 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:07.842 14:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:07.842 14:55:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:08.410 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:08.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:08.410 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:08.411 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:08.411 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:08.411 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:08.411 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:08.411 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:08.411 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:08.411 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:08.671 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:08.930 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
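The pass in progress above is one iteration of the test's nested loop: for each DH group (ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192) and for each of keys 0-3, the script configures DH-CHAP on both sides, attaches a controller over TCP, checks the negotiated digest, dhgroup and auth state on the resulting qpair, and detaches. A minimal sketch of one such iteration, assuming the target and host SPDK applications are already running, that rpc.py stands for scripts/rpc.py from the SPDK repo (the target-side RPC socket is not shown in this excerpt), and that key0/ckey0 are DH-CHAP key names registered earlier in the script:

    # Host side: restrict DH-CHAP negotiation to the digest/dhgroup under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Target side: authorize the host NQN on the subsystem with its key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller, authenticating with the same keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the qpair authenticated with the expected parameters, then detach
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same nvmf_subsystem_get_qpairs output is also filtered for .auth.digest and .auth.dhgroup, which is what the [[ sha256 == ... ]] and [[ ffdhe... ]] comparisons in the log check against.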
00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:09.190 { 00:42:09.190 "auth": { 00:42:09.190 "dhgroup": "ffdhe4096", 00:42:09.190 "digest": "sha256", 00:42:09.190 "state": "completed" 00:42:09.190 }, 00:42:09.190 "cntlid": 25, 00:42:09.190 "listen_address": { 00:42:09.190 "adrfam": "IPv4", 00:42:09.190 "traddr": "10.0.0.2", 00:42:09.190 "trsvcid": "4420", 00:42:09.190 "trtype": "TCP" 00:42:09.190 }, 00:42:09.190 "peer_address": { 00:42:09.190 "adrfam": "IPv4", 00:42:09.190 "traddr": "10.0.0.1", 00:42:09.190 "trsvcid": "42266", 00:42:09.190 "trtype": "TCP" 00:42:09.190 }, 00:42:09.190 "qid": 0, 00:42:09.190 "state": "enabled" 00:42:09.190 } 00:42:09.190 ]' 00:42:09.190 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:09.450 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:09.450 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:09.450 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:09.450 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:09.450 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:09.450 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:09.450 14:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:09.709 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:10.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:10.279 14:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:10.849 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:10.849 { 00:42:10.849 "auth": { 00:42:10.849 "dhgroup": "ffdhe4096", 00:42:10.849 "digest": "sha256", 00:42:10.849 "state": "completed" 00:42:10.849 }, 00:42:10.849 "cntlid": 27, 00:42:10.849 "listen_address": { 00:42:10.849 "adrfam": 
"IPv4", 00:42:10.849 "traddr": "10.0.0.2", 00:42:10.849 "trsvcid": "4420", 00:42:10.849 "trtype": "TCP" 00:42:10.849 }, 00:42:10.849 "peer_address": { 00:42:10.849 "adrfam": "IPv4", 00:42:10.849 "traddr": "10.0.0.1", 00:42:10.849 "trsvcid": "42290", 00:42:10.849 "trtype": "TCP" 00:42:10.849 }, 00:42:10.849 "qid": 0, 00:42:10.849 "state": "enabled" 00:42:10.849 } 00:42:10.849 ]' 00:42:10.849 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:11.108 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:11.108 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:11.108 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:11.108 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:11.108 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:11.108 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:11.108 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:11.367 14:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:11.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:11.962 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:12.531 00:42:12.531 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:12.531 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:12.531 14:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:12.531 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:12.531 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:12.531 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.531 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:12.531 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.531 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:12.531 { 00:42:12.531 "auth": { 00:42:12.531 "dhgroup": "ffdhe4096", 00:42:12.531 "digest": "sha256", 00:42:12.531 "state": "completed" 00:42:12.531 }, 00:42:12.531 "cntlid": 29, 00:42:12.531 "listen_address": { 00:42:12.531 "adrfam": "IPv4", 00:42:12.531 "traddr": "10.0.0.2", 00:42:12.531 "trsvcid": "4420", 00:42:12.531 "trtype": "TCP" 00:42:12.531 }, 00:42:12.531 "peer_address": { 00:42:12.531 "adrfam": "IPv4", 00:42:12.531 "traddr": "10.0.0.1", 00:42:12.531 "trsvcid": "55056", 00:42:12.532 "trtype": "TCP" 00:42:12.532 }, 00:42:12.532 "qid": 0, 00:42:12.532 "state": "enabled" 00:42:12.532 } 00:42:12.532 ]' 00:42:12.532 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:12.532 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:12.532 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:12.792 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:12.792 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:12.792 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:12.792 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:12.792 14:55:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:13.051 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:13.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:13.620 14:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:13.620 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:13.880 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:14.140 { 00:42:14.140 "auth": { 00:42:14.140 "dhgroup": "ffdhe4096", 00:42:14.140 "digest": "sha256", 00:42:14.140 "state": "completed" 00:42:14.140 }, 00:42:14.140 "cntlid": 31, 00:42:14.140 "listen_address": { 00:42:14.140 "adrfam": "IPv4", 00:42:14.140 "traddr": "10.0.0.2", 00:42:14.140 "trsvcid": "4420", 00:42:14.140 "trtype": "TCP" 00:42:14.140 }, 00:42:14.140 "peer_address": { 00:42:14.140 "adrfam": "IPv4", 00:42:14.140 "traddr": "10.0.0.1", 00:42:14.140 "trsvcid": "55086", 00:42:14.140 "trtype": "TCP" 00:42:14.140 }, 00:42:14.140 "qid": 0, 00:42:14.140 "state": "enabled" 00:42:14.140 } 00:42:14.140 ]' 00:42:14.140 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:14.399 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:14.399 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:14.399 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:14.399 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:14.399 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:14.399 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:14.399 14:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:14.659 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:15.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:15.227 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:15.486 14:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:15.744 00:42:15.744 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:15.744 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:15.744 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:16.002 { 00:42:16.002 "auth": { 00:42:16.002 "dhgroup": "ffdhe6144", 00:42:16.002 "digest": "sha256", 00:42:16.002 "state": "completed" 00:42:16.002 }, 00:42:16.002 "cntlid": 33, 00:42:16.002 "listen_address": { 00:42:16.002 
"adrfam": "IPv4", 00:42:16.002 "traddr": "10.0.0.2", 00:42:16.002 "trsvcid": "4420", 00:42:16.002 "trtype": "TCP" 00:42:16.002 }, 00:42:16.002 "peer_address": { 00:42:16.002 "adrfam": "IPv4", 00:42:16.002 "traddr": "10.0.0.1", 00:42:16.002 "trsvcid": "55118", 00:42:16.002 "trtype": "TCP" 00:42:16.002 }, 00:42:16.002 "qid": 0, 00:42:16.002 "state": "enabled" 00:42:16.002 } 00:42:16.002 ]' 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:16.002 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:16.260 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:16.260 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:16.261 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:16.519 14:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:17.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:17.086 14:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:17.653 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:17.653 { 00:42:17.653 "auth": { 00:42:17.653 "dhgroup": "ffdhe6144", 00:42:17.653 "digest": "sha256", 00:42:17.653 "state": "completed" 00:42:17.653 }, 00:42:17.653 "cntlid": 35, 00:42:17.653 "listen_address": { 00:42:17.653 "adrfam": "IPv4", 00:42:17.653 "traddr": "10.0.0.2", 00:42:17.653 "trsvcid": "4420", 00:42:17.653 "trtype": "TCP" 00:42:17.653 }, 00:42:17.653 "peer_address": { 00:42:17.653 "adrfam": "IPv4", 00:42:17.653 "traddr": "10.0.0.1", 00:42:17.653 "trsvcid": "55158", 00:42:17.653 "trtype": "TCP" 00:42:17.653 }, 00:42:17.653 "qid": 0, 00:42:17.653 "state": "enabled" 00:42:17.653 } 00:42:17.653 ]' 00:42:17.653 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:17.912 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:17.912 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:17.912 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:17.912 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:17.912 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:17.912 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:42:17.912 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:18.171 14:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:18.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.741 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:19.000 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.000 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:19.000 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:42:19.260 00:42:19.260 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:19.260 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:19.260 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:19.519 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:19.519 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:19.519 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:19.519 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:19.519 14:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:19.519 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:19.519 { 00:42:19.519 "auth": { 00:42:19.519 "dhgroup": "ffdhe6144", 00:42:19.519 "digest": "sha256", 00:42:19.519 "state": "completed" 00:42:19.519 }, 00:42:19.519 "cntlid": 37, 00:42:19.519 "listen_address": { 00:42:19.519 "adrfam": "IPv4", 00:42:19.519 "traddr": "10.0.0.2", 00:42:19.519 "trsvcid": "4420", 00:42:19.519 "trtype": "TCP" 00:42:19.519 }, 00:42:19.519 "peer_address": { 00:42:19.519 "adrfam": "IPv4", 00:42:19.519 "traddr": "10.0.0.1", 00:42:19.519 "trsvcid": "55202", 00:42:19.519 "trtype": "TCP" 00:42:19.519 }, 00:42:19.519 "qid": 0, 00:42:19.519 "state": "enabled" 00:42:19.519 } 00:42:19.519 ]' 00:42:19.519 14:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:19.519 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:19.519 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:19.519 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:19.519 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:19.519 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:19.519 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:19.519 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:19.807 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:20.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:20.374 14:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:20.634 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:20.893 00:42:20.893 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:20.893 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:20.893 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:21.153 { 00:42:21.153 "auth": { 00:42:21.153 "dhgroup": "ffdhe6144", 00:42:21.153 "digest": "sha256", 00:42:21.153 "state": "completed" 00:42:21.153 }, 00:42:21.153 "cntlid": 39, 00:42:21.153 "listen_address": { 
00:42:21.153 "adrfam": "IPv4", 00:42:21.153 "traddr": "10.0.0.2", 00:42:21.153 "trsvcid": "4420", 00:42:21.153 "trtype": "TCP" 00:42:21.153 }, 00:42:21.153 "peer_address": { 00:42:21.153 "adrfam": "IPv4", 00:42:21.153 "traddr": "10.0.0.1", 00:42:21.153 "trsvcid": "55226", 00:42:21.153 "trtype": "TCP" 00:42:21.153 }, 00:42:21.153 "qid": 0, 00:42:21.153 "state": "enabled" 00:42:21.153 } 00:42:21.153 ]' 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:21.153 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:21.413 14:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:21.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:21.979 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:22.239 14:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:22.808 00:42:22.808 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:22.808 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:22.808 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:23.067 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:23.067 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:23.067 14:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.067 14:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:23.067 14:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:23.067 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:23.067 { 00:42:23.067 "auth": { 00:42:23.067 "dhgroup": "ffdhe8192", 00:42:23.067 "digest": "sha256", 00:42:23.067 "state": "completed" 00:42:23.067 }, 00:42:23.067 "cntlid": 41, 00:42:23.067 "listen_address": { 00:42:23.067 "adrfam": "IPv4", 00:42:23.067 "traddr": "10.0.0.2", 00:42:23.067 "trsvcid": "4420", 00:42:23.067 "trtype": "TCP" 00:42:23.067 }, 00:42:23.068 "peer_address": { 00:42:23.068 "adrfam": "IPv4", 00:42:23.068 "traddr": "10.0.0.1", 00:42:23.068 "trsvcid": "36194", 00:42:23.068 "trtype": "TCP" 00:42:23.068 }, 00:42:23.068 "qid": 0, 00:42:23.068 "state": "enabled" 00:42:23.068 } 00:42:23.068 ]' 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:42:23.068 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:23.327 14:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:23.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:23.895 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:24.153 14:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:24.719 00:42:24.719 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:24.719 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:24.719 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:24.977 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:24.977 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:24.977 14:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:24.977 14:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:24.978 { 00:42:24.978 "auth": { 00:42:24.978 "dhgroup": "ffdhe8192", 00:42:24.978 "digest": "sha256", 00:42:24.978 "state": "completed" 00:42:24.978 }, 00:42:24.978 "cntlid": 43, 00:42:24.978 "listen_address": { 00:42:24.978 "adrfam": "IPv4", 00:42:24.978 "traddr": "10.0.0.2", 00:42:24.978 "trsvcid": "4420", 00:42:24.978 "trtype": "TCP" 00:42:24.978 }, 00:42:24.978 "peer_address": { 00:42:24.978 "adrfam": "IPv4", 00:42:24.978 "traddr": "10.0.0.1", 00:42:24.978 "trsvcid": "36212", 00:42:24.978 "trtype": "TCP" 00:42:24.978 }, 00:42:24.978 "qid": 0, 00:42:24.978 "state": "enabled" 00:42:24.978 } 00:42:24.978 ]' 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:24.978 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:25.236 14:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:25.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:25.803 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:26.062 14:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:26.630 00:42:26.630 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:26.630 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:26.630 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:26.889 { 00:42:26.889 "auth": { 
00:42:26.889 "dhgroup": "ffdhe8192", 00:42:26.889 "digest": "sha256", 00:42:26.889 "state": "completed" 00:42:26.889 }, 00:42:26.889 "cntlid": 45, 00:42:26.889 "listen_address": { 00:42:26.889 "adrfam": "IPv4", 00:42:26.889 "traddr": "10.0.0.2", 00:42:26.889 "trsvcid": "4420", 00:42:26.889 "trtype": "TCP" 00:42:26.889 }, 00:42:26.889 "peer_address": { 00:42:26.889 "adrfam": "IPv4", 00:42:26.889 "traddr": "10.0.0.1", 00:42:26.889 "trsvcid": "36250", 00:42:26.889 "trtype": "TCP" 00:42:26.889 }, 00:42:26.889 "qid": 0, 00:42:26.889 "state": "enabled" 00:42:26.889 } 00:42:26.889 ]' 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:26.889 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:27.148 14:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:27.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:27.722 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:27.982 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:28.550 00:42:28.550 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:28.550 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:28.550 14:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:28.550 { 00:42:28.550 "auth": { 00:42:28.550 "dhgroup": "ffdhe8192", 00:42:28.550 "digest": "sha256", 00:42:28.550 "state": "completed" 00:42:28.550 }, 00:42:28.550 "cntlid": 47, 00:42:28.550 "listen_address": { 00:42:28.550 "adrfam": "IPv4", 00:42:28.550 "traddr": "10.0.0.2", 00:42:28.550 "trsvcid": "4420", 00:42:28.550 "trtype": "TCP" 00:42:28.550 }, 00:42:28.550 "peer_address": { 00:42:28.550 "adrfam": "IPv4", 00:42:28.550 "traddr": "10.0.0.1", 00:42:28.550 "trsvcid": "36272", 00:42:28.550 "trtype": "TCP" 00:42:28.550 }, 00:42:28.550 "qid": 0, 00:42:28.550 "state": "enabled" 00:42:28.550 } 00:42:28.550 ]' 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:42:28.550 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:28.809 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:42:28.809 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:28.809 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:28.809 14:55:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:28.809 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:29.069 14:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:29.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:29.638 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:29.898 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:30.156 00:42:30.156 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:30.156 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:30.156 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:30.156 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:30.156 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:30.156 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:30.156 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:30.415 { 00:42:30.415 "auth": { 00:42:30.415 "dhgroup": "null", 00:42:30.415 "digest": "sha384", 00:42:30.415 "state": "completed" 00:42:30.415 }, 00:42:30.415 "cntlid": 49, 00:42:30.415 "listen_address": { 00:42:30.415 "adrfam": "IPv4", 00:42:30.415 "traddr": "10.0.0.2", 00:42:30.415 "trsvcid": "4420", 00:42:30.415 "trtype": "TCP" 00:42:30.415 }, 00:42:30.415 "peer_address": { 00:42:30.415 "adrfam": "IPv4", 00:42:30.415 "traddr": "10.0.0.1", 00:42:30.415 "trsvcid": "36308", 00:42:30.415 "trtype": "TCP" 00:42:30.415 }, 00:42:30.415 "qid": 0, 00:42:30.415 "state": "enabled" 00:42:30.415 } 00:42:30.415 ]' 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:30.415 14:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:30.673 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:31.241 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:31.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:31.241 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:31.241 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:31.241 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:31.241 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:31.241 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:31.241 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:31.242 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:31.500 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:31.501 14:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:31.763 00:42:31.763 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:31.763 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:31.763 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:32.027 { 00:42:32.027 "auth": { 00:42:32.027 "dhgroup": "null", 00:42:32.027 "digest": "sha384", 00:42:32.027 "state": "completed" 00:42:32.027 }, 00:42:32.027 "cntlid": 51, 00:42:32.027 "listen_address": { 00:42:32.027 "adrfam": "IPv4", 00:42:32.027 "traddr": "10.0.0.2", 00:42:32.027 "trsvcid": "4420", 00:42:32.027 "trtype": "TCP" 00:42:32.027 }, 00:42:32.027 "peer_address": { 00:42:32.027 "adrfam": "IPv4", 00:42:32.027 "traddr": "10.0.0.1", 00:42:32.027 "trsvcid": "46200", 00:42:32.027 "trtype": "TCP" 00:42:32.027 }, 00:42:32.027 "qid": 0, 00:42:32.027 "state": "enabled" 00:42:32.027 } 00:42:32.027 ]' 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:32.027 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:32.304 14:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:32.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:32.872 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
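[Editor's note] For orientation, the per-iteration flow that this trace keeps repeating can be condensed into the short bash sketch below. It reuses only the RPCs, sockets, NQNs, keys and jq filters visible in the log (sha384 / null / key2 is the combination exercised next); the actual loop lives in target/auth.sh and is only approximated here, so treat this as a hedged summary rather than the script itself.

    #!/usr/bin/env bash
    # One pass of the auth matrix: pin the host to one digest/dhgroup, allow the key
    # on the target subsystem, attach, verify the negotiated auth parameters, tear down.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    hostsock="/var/tmp/host.sock"                      # host-side SPDK app (hostrpc in the trace)
    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5"

    # Host side: restrict the initiator to a single digest/dhgroup combination.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

    # Target side (default RPC socket, rpc_cmd in the trace): allow the host with a key
    # and, when the iteration has one, a controller key for bidirectional auth.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach with the matching secrets.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the controller exists and the qpair negotiated what was requested.
    "$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'     # expect nvme0
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'  # expect sha384
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup' # expect null
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect completed

    # Tear down before the next digest/dhgroup/key combination. (The trace additionally
    # re-checks the path with nvme-cli: nvme connect ... --dhchap-secret/--dhchap-ctrl-secret
    # using the DHHC-1 strings shown in the log, then nvme disconnect.)
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"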
00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:33.131 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:33.391 00:42:33.391 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:33.391 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:33.391 14:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:33.650 { 00:42:33.650 "auth": { 00:42:33.650 "dhgroup": "null", 00:42:33.650 "digest": "sha384", 00:42:33.650 "state": "completed" 00:42:33.650 }, 00:42:33.650 "cntlid": 53, 00:42:33.650 "listen_address": { 00:42:33.650 "adrfam": "IPv4", 00:42:33.650 "traddr": "10.0.0.2", 00:42:33.650 "trsvcid": "4420", 00:42:33.650 "trtype": "TCP" 00:42:33.650 }, 00:42:33.650 "peer_address": { 00:42:33.650 "adrfam": "IPv4", 00:42:33.650 "traddr": "10.0.0.1", 00:42:33.650 "trsvcid": "46232", 00:42:33.650 "trtype": "TCP" 00:42:33.650 }, 00:42:33.650 "qid": 0, 00:42:33.650 "state": "enabled" 00:42:33.650 } 00:42:33.650 ]' 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:33.650 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:33.909 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:34.478 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:34.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:34.478 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:34.479 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.479 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:34.479 14:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.479 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:34.479 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:34.479 14:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:34.738 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:34.739 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:34.998 00:42:34.998 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:34.998 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:34.998 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:35.258 { 00:42:35.258 "auth": { 00:42:35.258 "dhgroup": "null", 00:42:35.258 "digest": "sha384", 00:42:35.258 "state": "completed" 00:42:35.258 }, 00:42:35.258 "cntlid": 55, 00:42:35.258 "listen_address": { 00:42:35.258 "adrfam": "IPv4", 00:42:35.258 "traddr": "10.0.0.2", 00:42:35.258 "trsvcid": "4420", 00:42:35.258 "trtype": "TCP" 00:42:35.258 }, 00:42:35.258 "peer_address": { 00:42:35.258 "adrfam": "IPv4", 00:42:35.258 "traddr": "10.0.0.1", 00:42:35.258 "trsvcid": "46246", 00:42:35.258 "trtype": "TCP" 00:42:35.258 }, 00:42:35.258 "qid": 0, 00:42:35.258 "state": "enabled" 00:42:35.258 } 00:42:35.258 ]' 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:35.258 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:35.517 14:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:36.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:36.086 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.346 14:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:36.605 00:42:36.605 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:36.605 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:36.605 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:36.865 14:55:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:36.865 { 00:42:36.865 "auth": { 00:42:36.865 "dhgroup": "ffdhe2048", 00:42:36.865 "digest": "sha384", 00:42:36.865 "state": "completed" 00:42:36.865 }, 00:42:36.865 "cntlid": 57, 00:42:36.865 "listen_address": { 00:42:36.865 "adrfam": "IPv4", 00:42:36.865 "traddr": "10.0.0.2", 00:42:36.865 "trsvcid": "4420", 00:42:36.865 "trtype": "TCP" 00:42:36.865 }, 00:42:36.865 "peer_address": { 00:42:36.865 "adrfam": "IPv4", 00:42:36.865 "traddr": "10.0.0.1", 00:42:36.865 "trsvcid": "46272", 00:42:36.865 "trtype": "TCP" 00:42:36.865 }, 00:42:36.865 "qid": 0, 00:42:36.865 "state": "enabled" 00:42:36.865 } 00:42:36.865 ]' 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:36.865 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:37.124 14:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:37.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:37.694 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:37.953 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:38.212 00:42:38.212 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:38.212 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:38.212 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:38.471 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:38.471 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:38.471 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:38.471 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:38.471 14:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:38.471 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:38.471 { 00:42:38.471 "auth": { 00:42:38.471 "dhgroup": "ffdhe2048", 00:42:38.471 "digest": "sha384", 00:42:38.471 "state": "completed" 00:42:38.471 }, 00:42:38.471 "cntlid": 59, 00:42:38.471 "listen_address": { 00:42:38.471 "adrfam": "IPv4", 00:42:38.471 "traddr": "10.0.0.2", 00:42:38.471 "trsvcid": "4420", 00:42:38.471 "trtype": "TCP" 00:42:38.471 }, 00:42:38.471 "peer_address": { 00:42:38.471 "adrfam": "IPv4", 00:42:38.471 "traddr": "10.0.0.1", 00:42:38.471 "trsvcid": "46298", 00:42:38.471 "trtype": "TCP" 00:42:38.471 }, 00:42:38.471 "qid": 0, 00:42:38.472 "state": "enabled" 00:42:38.472 } 00:42:38.472 ]' 00:42:38.472 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:38.472 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:38.472 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:38.472 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:38.472 14:55:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:38.472 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:38.472 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:38.472 14:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:38.730 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:39.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:39.299 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:39.558 14:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:39.558 14:55:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:39.818 00:42:39.818 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:39.818 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:39.818 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:39.818 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:39.818 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:39.818 14:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:39.818 14:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.077 14:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.077 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:40.077 { 00:42:40.077 "auth": { 00:42:40.077 "dhgroup": "ffdhe2048", 00:42:40.077 "digest": "sha384", 00:42:40.077 "state": "completed" 00:42:40.077 }, 00:42:40.077 "cntlid": 61, 00:42:40.077 "listen_address": { 00:42:40.077 "adrfam": "IPv4", 00:42:40.077 "traddr": "10.0.0.2", 00:42:40.077 "trsvcid": "4420", 00:42:40.077 "trtype": "TCP" 00:42:40.077 }, 00:42:40.077 "peer_address": { 00:42:40.077 "adrfam": "IPv4", 00:42:40.077 "traddr": "10.0.0.1", 00:42:40.077 "trsvcid": "46328", 00:42:40.077 "trtype": "TCP" 00:42:40.077 }, 00:42:40.077 "qid": 0, 00:42:40.077 "state": "enabled" 00:42:40.078 } 00:42:40.078 ]' 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:40.078 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:40.344 14:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:40.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:40.922 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:41.180 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:41.439 00:42:41.439 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:41.439 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:41.439 14:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:41.439 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:41.439 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:41.439 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:41.439 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:41.439 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:42:41.439 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:41.439 { 00:42:41.439 "auth": { 00:42:41.439 "dhgroup": "ffdhe2048", 00:42:41.439 "digest": "sha384", 00:42:41.439 "state": "completed" 00:42:41.439 }, 00:42:41.439 "cntlid": 63, 00:42:41.439 "listen_address": { 00:42:41.439 "adrfam": "IPv4", 00:42:41.439 "traddr": "10.0.0.2", 00:42:41.439 "trsvcid": "4420", 00:42:41.439 "trtype": "TCP" 00:42:41.439 }, 00:42:41.439 "peer_address": { 00:42:41.439 "adrfam": "IPv4", 00:42:41.439 "traddr": "10.0.0.1", 00:42:41.439 "trsvcid": "49404", 00:42:41.439 "trtype": "TCP" 00:42:41.439 }, 00:42:41.439 "qid": 0, 00:42:41.439 "state": "enabled" 00:42:41.439 } 00:42:41.439 ]' 00:42:41.439 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:41.699 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:41.699 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:41.699 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:42:41.699 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:41.699 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:41.699 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:41.699 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:41.959 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:42.527 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:42.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:42.528 14:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:42.787 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:43.047 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:43.047 { 00:42:43.047 "auth": { 00:42:43.047 "dhgroup": "ffdhe3072", 00:42:43.047 "digest": "sha384", 00:42:43.047 "state": "completed" 00:42:43.047 }, 00:42:43.047 "cntlid": 65, 00:42:43.047 "listen_address": { 00:42:43.047 "adrfam": "IPv4", 00:42:43.047 "traddr": "10.0.0.2", 00:42:43.047 "trsvcid": "4420", 00:42:43.047 "trtype": "TCP" 00:42:43.047 }, 00:42:43.047 "peer_address": { 00:42:43.047 "adrfam": "IPv4", 00:42:43.047 "traddr": "10.0.0.1", 00:42:43.047 "trsvcid": "49426", 00:42:43.047 "trtype": "TCP" 00:42:43.047 }, 00:42:43.047 "qid": 0, 00:42:43.047 "state": "enabled" 00:42:43.047 } 00:42:43.047 ]' 00:42:43.047 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:43.307 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:43.307 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:43.307 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
00:42:43.307 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:43.307 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:43.307 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:43.307 14:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:43.565 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:44.132 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:44.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:44.133 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:44.133 14:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.133 14:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:44.133 14:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.133 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:44.133 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:44.133 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:44.393 14:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:44.658 00:42:44.658 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:44.658 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:44.658 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:44.658 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:44.658 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:44.658 14:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:44.658 14:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:44.925 { 00:42:44.925 "auth": { 00:42:44.925 "dhgroup": "ffdhe3072", 00:42:44.925 "digest": "sha384", 00:42:44.925 "state": "completed" 00:42:44.925 }, 00:42:44.925 "cntlid": 67, 00:42:44.925 "listen_address": { 00:42:44.925 "adrfam": "IPv4", 00:42:44.925 "traddr": "10.0.0.2", 00:42:44.925 "trsvcid": "4420", 00:42:44.925 "trtype": "TCP" 00:42:44.925 }, 00:42:44.925 "peer_address": { 00:42:44.925 "adrfam": "IPv4", 00:42:44.925 "traddr": "10.0.0.1", 00:42:44.925 "trsvcid": "49464", 00:42:44.925 "trtype": "TCP" 00:42:44.925 }, 00:42:44.925 "qid": 0, 00:42:44.925 "state": "enabled" 00:42:44.925 } 00:42:44.925 ]' 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:44.925 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:45.185 14:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:45.753 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:45.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:42:45.753 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:45.753 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:45.753 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:45.754 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:45.754 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:45.754 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:45.754 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.013 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:46.014 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.014 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:46.014 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:46.273 00:42:46.273 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:46.273 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:46.273 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:46.273 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:46.534 { 00:42:46.534 "auth": { 00:42:46.534 "dhgroup": "ffdhe3072", 00:42:46.534 "digest": "sha384", 00:42:46.534 "state": "completed" 00:42:46.534 }, 00:42:46.534 "cntlid": 69, 00:42:46.534 "listen_address": { 00:42:46.534 "adrfam": "IPv4", 00:42:46.534 "traddr": "10.0.0.2", 00:42:46.534 "trsvcid": "4420", 00:42:46.534 "trtype": "TCP" 00:42:46.534 }, 00:42:46.534 "peer_address": { 00:42:46.534 "adrfam": "IPv4", 00:42:46.534 "traddr": "10.0.0.1", 00:42:46.534 "trsvcid": "49494", 00:42:46.534 "trtype": "TCP" 00:42:46.534 }, 00:42:46.534 "qid": 0, 00:42:46.534 "state": "enabled" 00:42:46.534 } 00:42:46.534 ]' 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:46.534 14:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:46.534 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:42:46.534 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:46.535 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:46.535 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:46.535 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:46.794 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:47.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:47.363 14:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:47.623 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:47.883 00:42:47.883 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:47.883 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:47.883 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:48.143 { 00:42:48.143 "auth": { 00:42:48.143 "dhgroup": "ffdhe3072", 00:42:48.143 "digest": "sha384", 00:42:48.143 "state": "completed" 00:42:48.143 }, 00:42:48.143 "cntlid": 71, 00:42:48.143 "listen_address": { 00:42:48.143 "adrfam": "IPv4", 00:42:48.143 "traddr": "10.0.0.2", 00:42:48.143 "trsvcid": "4420", 00:42:48.143 "trtype": "TCP" 00:42:48.143 }, 00:42:48.143 "peer_address": { 00:42:48.143 "adrfam": "IPv4", 00:42:48.143 "traddr": "10.0.0.1", 00:42:48.143 "trsvcid": "49526", 00:42:48.143 "trtype": "TCP" 00:42:48.143 }, 00:42:48.143 "qid": 0, 00:42:48.143 "state": "enabled" 00:42:48.143 } 00:42:48.143 ]' 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == 
\f\f\d\h\e\3\0\7\2 ]] 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:48.143 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:48.403 14:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:48.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:48.979 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:49.256 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:49.515 00:42:49.515 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:49.515 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:49.515 14:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:49.774 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:49.774 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:49.774 14:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:49.774 14:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:49.775 { 00:42:49.775 "auth": { 00:42:49.775 "dhgroup": "ffdhe4096", 00:42:49.775 "digest": "sha384", 00:42:49.775 "state": "completed" 00:42:49.775 }, 00:42:49.775 "cntlid": 73, 00:42:49.775 "listen_address": { 00:42:49.775 "adrfam": "IPv4", 00:42:49.775 "traddr": "10.0.0.2", 00:42:49.775 "trsvcid": "4420", 00:42:49.775 "trtype": "TCP" 00:42:49.775 }, 00:42:49.775 "peer_address": { 00:42:49.775 "adrfam": "IPv4", 00:42:49.775 "traddr": "10.0.0.1", 00:42:49.775 "trsvcid": "49546", 00:42:49.775 "trtype": "TCP" 00:42:49.775 }, 00:42:49.775 "qid": 0, 00:42:49.775 "state": "enabled" 00:42:49.775 } 00:42:49.775 ]' 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:49.775 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:50.034 14:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:42:50.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:50.603 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:50.862 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:51.122 00:42:51.122 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:51.122 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:51.122 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:51.381 { 00:42:51.381 "auth": { 00:42:51.381 "dhgroup": "ffdhe4096", 00:42:51.381 "digest": "sha384", 00:42:51.381 "state": "completed" 00:42:51.381 }, 00:42:51.381 "cntlid": 75, 00:42:51.381 "listen_address": { 00:42:51.381 "adrfam": "IPv4", 00:42:51.381 "traddr": "10.0.0.2", 00:42:51.381 "trsvcid": "4420", 00:42:51.381 "trtype": "TCP" 00:42:51.381 }, 00:42:51.381 "peer_address": { 00:42:51.381 "adrfam": "IPv4", 00:42:51.381 "traddr": "10.0.0.1", 00:42:51.381 "trsvcid": "49576", 00:42:51.381 "trtype": "TCP" 00:42:51.381 }, 00:42:51.381 "qid": 0, 00:42:51.381 "state": "enabled" 00:42:51.381 } 00:42:51.381 ]' 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:51.381 14:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:51.641 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:52.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:52.210 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:52.469 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe4096 2 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:52.470 14:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:52.470 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:52.731 00:42:52.731 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:52.731 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:52.731 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:52.990 { 00:42:52.990 "auth": { 00:42:52.990 "dhgroup": "ffdhe4096", 00:42:52.990 "digest": "sha384", 00:42:52.990 "state": "completed" 00:42:52.990 }, 00:42:52.990 "cntlid": 77, 00:42:52.990 "listen_address": { 00:42:52.990 "adrfam": "IPv4", 00:42:52.990 "traddr": "10.0.0.2", 00:42:52.990 "trsvcid": "4420", 00:42:52.990 "trtype": "TCP" 00:42:52.990 }, 00:42:52.990 "peer_address": { 00:42:52.990 "adrfam": "IPv4", 00:42:52.990 "traddr": "10.0.0.1", 00:42:52.990 "trsvcid": "33140", 00:42:52.990 "trtype": "TCP" 00:42:52.990 }, 00:42:52.990 "qid": 0, 00:42:52.990 "state": "enabled" 00:42:52.990 } 00:42:52.990 ]' 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
00:42:52.990 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:53.250 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:53.250 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:53.250 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:53.250 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:53.250 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:53.250 14:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:42:53.817 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:53.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:53.817 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:53.817 14:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:53.817 14:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:54.077 14:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:42:54.646 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:54.646 { 00:42:54.646 "auth": { 00:42:54.646 "dhgroup": "ffdhe4096", 00:42:54.646 "digest": "sha384", 00:42:54.646 "state": "completed" 00:42:54.646 }, 00:42:54.646 "cntlid": 79, 00:42:54.646 "listen_address": { 00:42:54.646 "adrfam": "IPv4", 00:42:54.646 "traddr": "10.0.0.2", 00:42:54.646 "trsvcid": "4420", 00:42:54.646 "trtype": "TCP" 00:42:54.646 }, 00:42:54.646 "peer_address": { 00:42:54.646 "adrfam": "IPv4", 00:42:54.646 "traddr": "10.0.0.1", 00:42:54.646 "trsvcid": "33164", 00:42:54.646 "trtype": "TCP" 00:42:54.646 }, 00:42:54.646 "qid": 0, 00:42:54.646 "state": "enabled" 00:42:54.646 } 00:42:54.646 ]' 00:42:54.646 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:54.906 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:54.906 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:54.906 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:42:54.906 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:54.906 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:54.906 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:54.906 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:55.166 14:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:55.736 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:55.736 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:42:56.312 00:42:56.312 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:56.312 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:56.312 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:56.584 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:56.584 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:56.584 14:56:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:56.584 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:56.584 14:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:56.584 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:56.584 { 00:42:56.584 "auth": { 00:42:56.584 "dhgroup": "ffdhe6144", 00:42:56.584 "digest": "sha384", 00:42:56.584 "state": "completed" 00:42:56.584 }, 00:42:56.584 "cntlid": 81, 00:42:56.584 "listen_address": { 00:42:56.584 "adrfam": "IPv4", 00:42:56.584 "traddr": "10.0.0.2", 00:42:56.584 "trsvcid": "4420", 00:42:56.584 "trtype": "TCP" 00:42:56.584 }, 00:42:56.584 "peer_address": { 00:42:56.584 "adrfam": "IPv4", 00:42:56.584 "traddr": "10.0.0.1", 00:42:56.584 "trsvcid": "33182", 00:42:56.584 "trtype": "TCP" 00:42:56.584 }, 00:42:56.584 "qid": 0, 00:42:56.584 "state": "enabled" 00:42:56.584 } 00:42:56.584 ]' 00:42:56.584 14:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:56.584 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:56.584 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:56.584 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:56.584 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:56.584 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:56.584 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:56.584 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:56.845 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:57.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:57.414 14:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:57.674 
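
A note for readers following the trace: hostrpc is simply rpc.py pointed at the SPDK initiator socket /var/tmp/host.sock (the @31 expansions show this), while rpc_cmd presumably talks to the nvmf target's own RPC socket. The setup half of the connect_authenticate iteration that follows (sha384, ffdhe6144, keyid 1) boils down to the sketch below; it assumes the key pair key1/ckey1 was registered earlier in auth.sh and is not shown in this excerpt.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5
    subnqn=nqn.2024-03.io.spdk:cnode0
    # limit the SPDK initiator to one digest and one DH group so the handshake must use exactly this pair
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # allow the host on the subsystem, naming the host key and the controller (bidirectional) key
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # attach a controller through the SPDK initiator; this is what drives the DH-HMAC-CHAP exchange
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
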
14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:57.674 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:42:57.933 00:42:57.933 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:57.933 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:57.933 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:42:58.192 { 00:42:58.192 "auth": { 00:42:58.192 "dhgroup": "ffdhe6144", 00:42:58.192 "digest": "sha384", 00:42:58.192 "state": "completed" 00:42:58.192 }, 00:42:58.192 "cntlid": 83, 00:42:58.192 "listen_address": { 00:42:58.192 "adrfam": "IPv4", 00:42:58.192 "traddr": "10.0.0.2", 00:42:58.192 "trsvcid": "4420", 00:42:58.192 "trtype": "TCP" 00:42:58.192 }, 00:42:58.192 "peer_address": { 00:42:58.192 "adrfam": "IPv4", 00:42:58.192 "traddr": "10.0.0.1", 00:42:58.192 "trsvcid": "33214", 00:42:58.192 "trtype": "TCP" 00:42:58.192 }, 00:42:58.192 "qid": 0, 00:42:58.192 "state": "enabled" 00:42:58.192 } 00:42:58.192 ]' 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:42:58.192 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:42:58.452 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:42:58.452 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:42:58.452 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:42:58.452 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:42:58.452 14:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:42:58.711 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:42:59.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:59.280 14:56:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:59.280 14:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:42:59.849 00:42:59.849 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:42:59.849 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:42:59.849 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:00.109 { 00:43:00.109 "auth": { 00:43:00.109 "dhgroup": "ffdhe6144", 00:43:00.109 "digest": "sha384", 00:43:00.109 "state": "completed" 00:43:00.109 }, 00:43:00.109 "cntlid": 85, 00:43:00.109 "listen_address": { 00:43:00.109 "adrfam": "IPv4", 00:43:00.109 "traddr": "10.0.0.2", 00:43:00.109 "trsvcid": "4420", 00:43:00.109 "trtype": "TCP" 00:43:00.109 }, 00:43:00.109 "peer_address": { 00:43:00.109 "adrfam": "IPv4", 00:43:00.109 "traddr": "10.0.0.1", 00:43:00.109 "trsvcid": "33242", 00:43:00.109 "trtype": "TCP" 00:43:00.109 }, 00:43:00.109 "qid": 0, 00:43:00.109 "state": "enabled" 00:43:00.109 } 00:43:00.109 ]' 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:00.109 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:00.378 14:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret 
DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:00.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:00.962 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:01.221 14:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:01.481 00:43:01.481 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:01.481 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:01.481 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:01.741 { 00:43:01.741 "auth": { 00:43:01.741 "dhgroup": "ffdhe6144", 00:43:01.741 "digest": "sha384", 00:43:01.741 "state": "completed" 00:43:01.741 }, 00:43:01.741 "cntlid": 87, 00:43:01.741 "listen_address": { 00:43:01.741 "adrfam": "IPv4", 00:43:01.741 "traddr": "10.0.0.2", 00:43:01.741 "trsvcid": "4420", 00:43:01.741 "trtype": "TCP" 00:43:01.741 }, 00:43:01.741 "peer_address": { 00:43:01.741 "adrfam": "IPv4", 00:43:01.741 "traddr": "10.0.0.1", 00:43:01.741 "trsvcid": "53458", 00:43:01.741 "trtype": "TCP" 00:43:01.741 }, 00:43:01.741 "qid": 0, 00:43:01.741 "state": "enabled" 00:43:01.741 } 00:43:01.741 ]' 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:01.741 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:02.001 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:02.001 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:02.001 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:02.001 14:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:02.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:02.938 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:03.506 00:43:03.506 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:03.506 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:03.506 14:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:03.764 { 00:43:03.764 "auth": { 00:43:03.764 "dhgroup": "ffdhe8192", 00:43:03.764 "digest": "sha384", 00:43:03.764 "state": "completed" 00:43:03.764 }, 00:43:03.764 "cntlid": 89, 00:43:03.764 "listen_address": { 00:43:03.764 "adrfam": "IPv4", 00:43:03.764 "traddr": "10.0.0.2", 00:43:03.764 "trsvcid": "4420", 00:43:03.764 "trtype": "TCP" 00:43:03.764 }, 00:43:03.764 "peer_address": { 00:43:03.764 "adrfam": "IPv4", 00:43:03.764 "traddr": "10.0.0.1", 00:43:03.764 "trsvcid": "53478", 00:43:03.764 "trtype": "TCP" 00:43:03.764 }, 00:43:03.764 "qid": 0, 00:43:03.764 "state": "enabled" 00:43:03.764 } 00:43:03.764 ]' 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
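
The assertions being issued at this point are plain jq filters over two RPC calls. A sketch of the verification step for this sha384 / ffdhe8192 pass, assuming the target app answers on the default RPC socket (the trace's rpc_cmd wrapper hides the socket path):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    # the attached controller should be visible on the initiator side under the expected name
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
    # the target reports the negotiated auth parameters for each qpair of the subsystem
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    jq -r '.[0].auth.digest'  <<< "$qpairs"     # expect: sha384
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"     # expect: ffdhe8192
    jq -r '.[0].auth.state'   <<< "$qpairs"     # expect: completed
    # detach the SPDK controller; the script then repeats the handshake with the kernel initiator
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
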
00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:03.764 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:04.035 14:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:04.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:04.618 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:04.876 14:56:24 
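
Each iteration also re-runs the handshake with the kernel NVMe/TCP initiator, as in the nvme connect / nvme disconnect entries above. nvme-cli takes the secrets directly in DHHC-1 form, "DHHC-1:NN:<base64>:", where NN encodes the hash, if any, used to transform the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). A sketch of that leg with the secret material elided; the full DHHC-1 strings appear verbatim in the trace:

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
    # connect with the in-kernel initiator (-i 1 keeps it to a single I/O queue); passing a
    # controller secret as well makes the authentication bidirectional
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    # tear down and drop the host entry so the next keyid starts from a clean subsystem
    nvme disconnect -n "$subnqn"
    # the trace does this through its rpc_cmd wrapper; default target socket assumed here
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
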
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:04.876 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:05.444 00:43:05.444 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:05.444 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:05.444 14:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:05.705 { 00:43:05.705 "auth": { 00:43:05.705 "dhgroup": "ffdhe8192", 00:43:05.705 "digest": "sha384", 00:43:05.705 "state": "completed" 00:43:05.705 }, 00:43:05.705 "cntlid": 91, 00:43:05.705 "listen_address": { 00:43:05.705 "adrfam": "IPv4", 00:43:05.705 "traddr": "10.0.0.2", 00:43:05.705 "trsvcid": "4420", 00:43:05.705 "trtype": "TCP" 00:43:05.705 }, 00:43:05.705 "peer_address": { 00:43:05.705 "adrfam": "IPv4", 00:43:05.705 "traddr": "10.0.0.1", 00:43:05.705 "trsvcid": "53502", 00:43:05.705 "trtype": "TCP" 00:43:05.705 }, 00:43:05.705 "qid": 0, 00:43:05.705 "state": "enabled" 00:43:05.705 } 00:43:05.705 ]' 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:05.705 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:05.964 14:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret 
DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:43:06.532 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:06.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:06.532 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:06.532 14:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:06.533 14:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:06.533 14:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:06.533 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:06.533 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:06.533 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:06.791 14:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:06.792 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:06.792 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:07.359 00:43:07.359 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:07.359 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:07.359 14:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:07.618 14:56:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:07.618 { 00:43:07.618 "auth": { 00:43:07.618 "dhgroup": "ffdhe8192", 00:43:07.618 "digest": "sha384", 00:43:07.618 "state": "completed" 00:43:07.618 }, 00:43:07.618 "cntlid": 93, 00:43:07.618 "listen_address": { 00:43:07.618 "adrfam": "IPv4", 00:43:07.618 "traddr": "10.0.0.2", 00:43:07.618 "trsvcid": "4420", 00:43:07.618 "trtype": "TCP" 00:43:07.618 }, 00:43:07.618 "peer_address": { 00:43:07.618 "adrfam": "IPv4", 00:43:07.618 "traddr": "10.0.0.1", 00:43:07.618 "trsvcid": "53526", 00:43:07.618 "trtype": "TCP" 00:43:07.618 }, 00:43:07.618 "qid": 0, 00:43:07.618 "state": "enabled" 00:43:07.618 } 00:43:07.618 ]' 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:07.618 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:07.905 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:08.473 14:56:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:08.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:08.473 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:08.473 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:08.473 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:08.473 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:08.473 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:08.473 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:08.473 14:56:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:08.732 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:09.299 00:43:09.299 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:09.299 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:09.299 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:09.559 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:09.559 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:09.559 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:09.559 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:09.559 14:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:09.559 14:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:09.559 { 00:43:09.559 "auth": { 00:43:09.559 "dhgroup": "ffdhe8192", 00:43:09.559 "digest": "sha384", 00:43:09.559 "state": "completed" 00:43:09.559 }, 00:43:09.559 "cntlid": 95, 00:43:09.559 "listen_address": { 00:43:09.559 "adrfam": "IPv4", 00:43:09.559 "traddr": "10.0.0.2", 00:43:09.559 "trsvcid": "4420", 00:43:09.559 "trtype": "TCP" 00:43:09.559 }, 00:43:09.559 "peer_address": { 00:43:09.559 "adrfam": "IPv4", 00:43:09.559 "traddr": "10.0.0.1", 00:43:09.559 "trsvcid": "53550", 00:43:09.559 "trtype": "TCP" 00:43:09.559 }, 00:43:09.559 "qid": 0, 00:43:09.559 "state": "enabled" 00:43:09.559 } 00:43:09.559 ]' 00:43:09.559 14:56:28 
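
A few entries further on, the outer digest loop advances to sha512 and the DH group loop restarts at null. The @91/@92/@93 markers give away the overall shape of the sweep; roughly, as a reconstruction of the visible control flow only (the full digest and dhgroup lists are an assumption, only the values seen in this trace are confirmed, and keys[], hostrpc and connect_authenticate are defined earlier in auth.sh):

    # nested sweep over digest x dhgroup x key index, as implied by the loop markers in the trace
    for digest in "${digests[@]}"; do            # e.g. sha256 sha384 sha512
        for dhgroup in "${dhgroups[@]}"; do      # e.g. null ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do       # key indices 0..3 in this run
                # pin the initiator to the combination under test, then run one full cycle
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
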
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:09.559 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:43:09.559 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:09.559 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:09.559 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:09.559 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:09.559 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:09.559 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:09.819 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:10.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:10.388 14:56:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:10.648 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:10.907 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:10.907 { 00:43:10.907 "auth": { 00:43:10.907 "dhgroup": "null", 00:43:10.907 "digest": "sha512", 00:43:10.907 "state": "completed" 00:43:10.907 }, 00:43:10.907 "cntlid": 97, 00:43:10.907 "listen_address": { 00:43:10.907 "adrfam": "IPv4", 00:43:10.907 "traddr": "10.0.0.2", 00:43:10.907 "trsvcid": "4420", 00:43:10.907 "trtype": "TCP" 00:43:10.907 }, 00:43:10.907 "peer_address": { 00:43:10.907 "adrfam": "IPv4", 00:43:10.907 "traddr": "10.0.0.1", 00:43:10.907 "trsvcid": "53572", 00:43:10.907 "trtype": "TCP" 00:43:10.907 }, 00:43:10.907 "qid": 0, 00:43:10.907 "state": "enabled" 00:43:10.907 } 00:43:10.907 ]' 00:43:10.907 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:11.166 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:11.166 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:11.166 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:11.166 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:11.166 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:11.166 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:11.166 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:11.426 14:56:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:11.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:11.995 14:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:12.255 14:56:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:12.255 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:12.255 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:12.255 00:43:12.563 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:12.563 14:56:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:12.563 14:56:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:12.563 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:12.563 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:12.563 14:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:12.563 14:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:12.563 14:56:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:12.563 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:12.563 { 00:43:12.563 "auth": { 00:43:12.563 "dhgroup": "null", 00:43:12.563 "digest": "sha512", 00:43:12.563 "state": "completed" 00:43:12.563 }, 00:43:12.563 "cntlid": 99, 00:43:12.563 "listen_address": { 00:43:12.563 "adrfam": "IPv4", 00:43:12.563 "traddr": "10.0.0.2", 00:43:12.563 "trsvcid": "4420", 00:43:12.563 "trtype": "TCP" 00:43:12.563 }, 00:43:12.563 "peer_address": { 00:43:12.563 "adrfam": "IPv4", 00:43:12.563 "traddr": "10.0.0.1", 00:43:12.563 "trsvcid": "47904", 00:43:12.563 "trtype": "TCP" 00:43:12.563 }, 00:43:12.563 "qid": 0, 00:43:12.563 "state": "enabled" 00:43:12.563 } 00:43:12.563 ]' 00:43:12.563 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:12.823 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:12.823 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:12.823 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:12.823 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:12.823 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:12.823 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:12.823 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:13.082 14:56:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:13.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:13.650 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:13.651 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:13.909 00:43:13.909 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:13.909 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:13.909 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:14.168 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:14.168 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:14.168 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:14.168 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:14.168 14:56:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:14.168 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:14.169 { 00:43:14.169 "auth": { 00:43:14.169 "dhgroup": "null", 00:43:14.169 "digest": "sha512", 00:43:14.169 "state": "completed" 00:43:14.169 }, 00:43:14.169 "cntlid": 101, 00:43:14.169 "listen_address": { 00:43:14.169 "adrfam": "IPv4", 00:43:14.169 "traddr": "10.0.0.2", 00:43:14.169 "trsvcid": "4420", 00:43:14.169 "trtype": "TCP" 00:43:14.169 }, 00:43:14.169 "peer_address": { 00:43:14.169 "adrfam": "IPv4", 00:43:14.169 "traddr": "10.0.0.1", 00:43:14.169 "trsvcid": "47944", 
00:43:14.169 "trtype": "TCP" 00:43:14.169 }, 00:43:14.169 "qid": 0, 00:43:14.169 "state": "enabled" 00:43:14.169 } 00:43:14.169 ]' 00:43:14.169 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:14.169 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:14.169 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:14.428 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:14.428 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:14.428 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:14.428 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:14.428 14:56:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:14.428 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:14.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:14.995 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:15.255 14:56:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:15.515 00:43:15.515 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:15.515 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:15.515 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:15.775 { 00:43:15.775 "auth": { 00:43:15.775 "dhgroup": "null", 00:43:15.775 "digest": "sha512", 00:43:15.775 "state": "completed" 00:43:15.775 }, 00:43:15.775 "cntlid": 103, 00:43:15.775 "listen_address": { 00:43:15.775 "adrfam": "IPv4", 00:43:15.775 "traddr": "10.0.0.2", 00:43:15.775 "trsvcid": "4420", 00:43:15.775 "trtype": "TCP" 00:43:15.775 }, 00:43:15.775 "peer_address": { 00:43:15.775 "adrfam": "IPv4", 00:43:15.775 "traddr": "10.0.0.1", 00:43:15.775 "trsvcid": "47962", 00:43:15.775 "trtype": "TCP" 00:43:15.775 }, 00:43:15.775 "qid": 0, 00:43:15.775 "state": "enabled" 00:43:15.775 } 00:43:15.775 ]' 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:43:15.775 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:16.035 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:16.035 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:16.035 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:16.035 14:56:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 
03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:16.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:16.972 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:17.231 00:43:17.231 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:17.231 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:17.231 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:17.489 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:17.489 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:17.489 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:17.489 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:17.489 14:56:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:17.489 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:17.489 { 00:43:17.489 "auth": { 00:43:17.489 "dhgroup": "ffdhe2048", 00:43:17.489 "digest": "sha512", 00:43:17.489 "state": "completed" 00:43:17.489 }, 00:43:17.489 "cntlid": 105, 00:43:17.489 "listen_address": { 00:43:17.489 "adrfam": "IPv4", 00:43:17.489 "traddr": "10.0.0.2", 00:43:17.489 "trsvcid": "4420", 00:43:17.490 "trtype": "TCP" 00:43:17.490 }, 00:43:17.490 "peer_address": { 00:43:17.490 "adrfam": "IPv4", 00:43:17.490 "traddr": "10.0.0.1", 00:43:17.490 "trsvcid": "47990", 00:43:17.490 "trtype": "TCP" 00:43:17.490 }, 00:43:17.490 "qid": 0, 00:43:17.490 "state": "enabled" 00:43:17.490 } 00:43:17.490 ]' 00:43:17.490 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:17.490 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:17.490 14:56:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:17.490 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:17.490 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:17.490 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:17.490 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:17.490 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:17.749 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:18.318 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:18.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:18.318 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:18.318 14:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:18.318 14:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:18.318 14:56:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:18.318 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:18.318 14:56:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:18.318 14:56:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:18.578 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:18.836 00:43:18.836 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:18.836 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:18.836 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:19.095 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:19.095 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:19.095 14:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:19.095 14:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.095 14:56:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:19.095 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:19.095 { 00:43:19.095 "auth": { 00:43:19.095 "dhgroup": "ffdhe2048", 00:43:19.095 "digest": "sha512", 00:43:19.095 "state": "completed" 00:43:19.095 }, 00:43:19.095 "cntlid": 107, 00:43:19.095 "listen_address": { 00:43:19.095 "adrfam": "IPv4", 00:43:19.095 "traddr": "10.0.0.2", 00:43:19.095 "trsvcid": "4420", 00:43:19.095 "trtype": "TCP" 00:43:19.095 }, 00:43:19.096 "peer_address": { 00:43:19.096 
"adrfam": "IPv4", 00:43:19.096 "traddr": "10.0.0.1", 00:43:19.096 "trsvcid": "48034", 00:43:19.096 "trtype": "TCP" 00:43:19.096 }, 00:43:19.096 "qid": 0, 00:43:19.096 "state": "enabled" 00:43:19.096 } 00:43:19.096 ]' 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:19.096 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:19.355 14:56:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:19.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:19.923 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:20.183 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:20.444 00:43:20.444 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:20.444 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:20.444 14:56:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:20.704 { 00:43:20.704 "auth": { 00:43:20.704 "dhgroup": "ffdhe2048", 00:43:20.704 "digest": "sha512", 00:43:20.704 "state": "completed" 00:43:20.704 }, 00:43:20.704 "cntlid": 109, 00:43:20.704 "listen_address": { 00:43:20.704 "adrfam": "IPv4", 00:43:20.704 "traddr": "10.0.0.2", 00:43:20.704 "trsvcid": "4420", 00:43:20.704 "trtype": "TCP" 00:43:20.704 }, 00:43:20.704 "peer_address": { 00:43:20.704 "adrfam": "IPv4", 00:43:20.704 "traddr": "10.0.0.1", 00:43:20.704 "trsvcid": "48056", 00:43:20.704 "trtype": "TCP" 00:43:20.704 }, 00:43:20.704 "qid": 0, 00:43:20.704 "state": "enabled" 00:43:20.704 } 00:43:20.704 ]' 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:20.704 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:20.964 14:56:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:21.533 14:56:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:21.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:21.533 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:21.533 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:21.533 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:21.533 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:21.533 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:21.533 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:21.533 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:21.793 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:21.794 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:22.053 00:43:22.053 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:22.053 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:22.053 14:56:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:22.313 { 00:43:22.313 "auth": { 00:43:22.313 "dhgroup": "ffdhe2048", 00:43:22.313 "digest": "sha512", 00:43:22.313 "state": "completed" 00:43:22.313 }, 00:43:22.313 "cntlid": 111, 00:43:22.313 "listen_address": { 00:43:22.313 "adrfam": "IPv4", 00:43:22.313 "traddr": "10.0.0.2", 00:43:22.313 "trsvcid": "4420", 00:43:22.313 "trtype": "TCP" 00:43:22.313 }, 00:43:22.313 "peer_address": { 00:43:22.313 "adrfam": "IPv4", 00:43:22.313 "traddr": "10.0.0.1", 00:43:22.313 "trsvcid": "35430", 00:43:22.313 "trtype": "TCP" 00:43:22.313 }, 00:43:22.313 "qid": 0, 00:43:22.313 "state": "enabled" 00:43:22.313 } 00:43:22.313 ]' 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:22.313 14:56:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:22.573 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:23.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:23.143 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:23.403 14:56:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:23.662 00:43:23.662 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:23.662 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:23.662 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:23.922 { 00:43:23.922 "auth": { 00:43:23.922 "dhgroup": "ffdhe3072", 00:43:23.922 "digest": "sha512", 00:43:23.922 "state": "completed" 00:43:23.922 }, 00:43:23.922 "cntlid": 113, 00:43:23.922 "listen_address": { 00:43:23.922 "adrfam": "IPv4", 00:43:23.922 "traddr": "10.0.0.2", 00:43:23.922 "trsvcid": "4420", 00:43:23.922 "trtype": "TCP" 00:43:23.922 }, 00:43:23.922 
"peer_address": { 00:43:23.922 "adrfam": "IPv4", 00:43:23.922 "traddr": "10.0.0.1", 00:43:23.922 "trsvcid": "35444", 00:43:23.922 "trtype": "TCP" 00:43:23.922 }, 00:43:23.922 "qid": 0, 00:43:23.922 "state": "enabled" 00:43:23.922 } 00:43:23.922 ]' 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:23.922 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:24.182 14:56:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:24.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:24.769 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:25.029 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:25.289 00:43:25.289 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:25.289 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:25.289 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:25.548 { 00:43:25.548 "auth": { 00:43:25.548 "dhgroup": "ffdhe3072", 00:43:25.548 "digest": "sha512", 00:43:25.548 "state": "completed" 00:43:25.548 }, 00:43:25.548 "cntlid": 115, 00:43:25.548 "listen_address": { 00:43:25.548 "adrfam": "IPv4", 00:43:25.548 "traddr": "10.0.0.2", 00:43:25.548 "trsvcid": "4420", 00:43:25.548 "trtype": "TCP" 00:43:25.548 }, 00:43:25.548 "peer_address": { 00:43:25.548 "adrfam": "IPv4", 00:43:25.548 "traddr": "10.0.0.1", 00:43:25.548 "trsvcid": "35478", 00:43:25.548 "trtype": "TCP" 00:43:25.548 }, 00:43:25.548 "qid": 0, 00:43:25.548 "state": "enabled" 00:43:25.548 } 00:43:25.548 ]' 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:25.548 14:56:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:25.548 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:25.548 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:25.548 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:25.548 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:25.548 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:43:25.809 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:26.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:26.378 14:56:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:26.637 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:43:26.637 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:26.637 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:26.637 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:26.638 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:26.897 00:43:26.897 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:43:26.897 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:26.897 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:27.157 { 00:43:27.157 "auth": { 00:43:27.157 "dhgroup": "ffdhe3072", 00:43:27.157 "digest": "sha512", 00:43:27.157 "state": "completed" 00:43:27.157 }, 00:43:27.157 "cntlid": 117, 00:43:27.157 "listen_address": { 00:43:27.157 "adrfam": "IPv4", 00:43:27.157 "traddr": "10.0.0.2", 00:43:27.157 "trsvcid": "4420", 00:43:27.157 "trtype": "TCP" 00:43:27.157 }, 00:43:27.157 "peer_address": { 00:43:27.157 "adrfam": "IPv4", 00:43:27.157 "traddr": "10.0.0.1", 00:43:27.157 "trsvcid": "35508", 00:43:27.157 "trtype": "TCP" 00:43:27.157 }, 00:43:27.157 "qid": 0, 00:43:27.157 "state": "enabled" 00:43:27.157 } 00:43:27.157 ]' 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:27.157 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:27.417 14:56:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:27.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:27.986 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:28.245 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:28.503 00:43:28.503 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:28.503 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:28.503 14:56:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:28.763 { 00:43:28.763 "auth": { 00:43:28.763 "dhgroup": "ffdhe3072", 00:43:28.763 "digest": "sha512", 00:43:28.763 "state": "completed" 00:43:28.763 }, 00:43:28.763 "cntlid": 119, 00:43:28.763 "listen_address": { 00:43:28.763 "adrfam": "IPv4", 00:43:28.763 "traddr": "10.0.0.2", 00:43:28.763 "trsvcid": "4420", 00:43:28.763 "trtype": "TCP" 
00:43:28.763 }, 00:43:28.763 "peer_address": { 00:43:28.763 "adrfam": "IPv4", 00:43:28.763 "traddr": "10.0.0.1", 00:43:28.763 "trsvcid": "35530", 00:43:28.763 "trtype": "TCP" 00:43:28.763 }, 00:43:28.763 "qid": 0, 00:43:28.763 "state": "enabled" 00:43:28.763 } 00:43:28.763 ]' 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:28.763 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:29.030 14:56:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:29.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:29.611 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:29.871 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:30.131 00:43:30.131 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:30.131 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:30.131 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:30.390 { 00:43:30.390 "auth": { 00:43:30.390 "dhgroup": "ffdhe4096", 00:43:30.390 "digest": "sha512", 00:43:30.390 "state": "completed" 00:43:30.390 }, 00:43:30.390 "cntlid": 121, 00:43:30.390 "listen_address": { 00:43:30.390 "adrfam": "IPv4", 00:43:30.390 "traddr": "10.0.0.2", 00:43:30.390 "trsvcid": "4420", 00:43:30.390 "trtype": "TCP" 00:43:30.390 }, 00:43:30.390 "peer_address": { 00:43:30.390 "adrfam": "IPv4", 00:43:30.390 "traddr": "10.0.0.1", 00:43:30.390 "trsvcid": "35564", 00:43:30.390 "trtype": "TCP" 00:43:30.390 }, 00:43:30.390 "qid": 0, 00:43:30.390 "state": "enabled" 00:43:30.390 } 00:43:30.390 ]' 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:30.390 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:30.391 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:30.391 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:30.391 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:30.391 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:30.391 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:30.391 14:56:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:30.650 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:31.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:31.218 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:31.478 14:56:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
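The pass above captures the shape of every connect_authenticate iteration in target/auth.sh: the host-side bdev_nvme layer is restricted to one digest/DH-group pair, the host NQN is re-registered on the subsystem with a DH-HMAC-CHAP key, and a controller is attached through the host RPC socket so the new qpair has to authenticate. A minimal sketch of that sequence, assuming the addresses, socket paths, and UUID-based host NQN seen in this log, and that key1/ckey1 were loaded into the target earlier in the script (not shown here):

# Sketch of one connect_authenticate pass (sha512 / ffdhe4096, key index 1).
# Paths, addresses, and flags are copied from the log above; key1/ckey1 are
# assumed to name keys registered earlier in target/auth.sh.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5

# 1) limit the initiator to a single digest and DH group
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# 2) allow the host on the subsystem with host and controller keys
#    (the log's rpc_cmd wrapper talks to the target application's RPC socket;
#    the default socket is assumed here)
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3) attach a controller over TCP; the qpair must complete DH-HMAC-CHAP
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1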
00:43:31.738 00:43:31.738 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:31.738 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:31.738 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:31.997 { 00:43:31.997 "auth": { 00:43:31.997 "dhgroup": "ffdhe4096", 00:43:31.997 "digest": "sha512", 00:43:31.997 "state": "completed" 00:43:31.997 }, 00:43:31.997 "cntlid": 123, 00:43:31.997 "listen_address": { 00:43:31.997 "adrfam": "IPv4", 00:43:31.997 "traddr": "10.0.0.2", 00:43:31.997 "trsvcid": "4420", 00:43:31.997 "trtype": "TCP" 00:43:31.997 }, 00:43:31.997 "peer_address": { 00:43:31.997 "adrfam": "IPv4", 00:43:31.997 "traddr": "10.0.0.1", 00:43:31.997 "trsvcid": "36400", 00:43:31.997 "trtype": "TCP" 00:43:31.997 }, 00:43:31.997 "qid": 0, 00:43:31.997 "state": "enabled" 00:43:31.997 } 00:43:31.997 ]' 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:31.997 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:32.256 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:32.256 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:32.256 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:32.256 14:56:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:32.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:32.824 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:33.083 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:33.353 00:43:33.353 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:33.353 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:33.353 14:56:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:33.612 { 00:43:33.612 "auth": { 00:43:33.612 "dhgroup": "ffdhe4096", 00:43:33.612 "digest": "sha512", 00:43:33.612 "state": "completed" 00:43:33.612 }, 00:43:33.612 "cntlid": 125, 
00:43:33.612 "listen_address": { 00:43:33.612 "adrfam": "IPv4", 00:43:33.612 "traddr": "10.0.0.2", 00:43:33.612 "trsvcid": "4420", 00:43:33.612 "trtype": "TCP" 00:43:33.612 }, 00:43:33.612 "peer_address": { 00:43:33.612 "adrfam": "IPv4", 00:43:33.612 "traddr": "10.0.0.1", 00:43:33.612 "trsvcid": "36422", 00:43:33.612 "trtype": "TCP" 00:43:33.612 }, 00:43:33.612 "qid": 0, 00:43:33.612 "state": "enabled" 00:43:33.612 } 00:43:33.612 ]' 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:33.612 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:33.871 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:33.871 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:33.871 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:33.871 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:33.871 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:33.871 14:56:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:34.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:34.440 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:34.700 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:34.959 00:43:34.959 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:34.959 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:34.959 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:35.219 { 00:43:35.219 "auth": { 00:43:35.219 "dhgroup": "ffdhe4096", 00:43:35.219 "digest": "sha512", 00:43:35.219 "state": "completed" 00:43:35.219 }, 00:43:35.219 "cntlid": 127, 00:43:35.219 "listen_address": { 00:43:35.219 "adrfam": "IPv4", 00:43:35.219 "traddr": "10.0.0.2", 00:43:35.219 "trsvcid": "4420", 00:43:35.219 "trtype": "TCP" 00:43:35.219 }, 00:43:35.219 "peer_address": { 00:43:35.219 "adrfam": "IPv4", 00:43:35.219 "traddr": "10.0.0.1", 00:43:35.219 "trsvcid": "36456", 00:43:35.219 "trtype": "TCP" 00:43:35.219 }, 00:43:35.219 "qid": 0, 00:43:35.219 "state": "enabled" 00:43:35.219 } 00:43:35.219 ]' 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:35.219 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:35.478 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:43:35.478 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:35.478 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:35.478 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:35.478 14:56:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:35.478 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:36.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:36.047 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.307 14:56:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.308 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:36.308 14:56:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
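Each attach is followed by the same verification and kernel-initiator round trip, here just started for sha512/ffdhe6144 with key0: the qpair list is dumped, jq checks the negotiated digest, DH group, and auth state, the user-space controller is detached, and nvme connect / nvme disconnect repeat the handshake with the raw DHHC-1 secrets. A minimal sketch under the same assumptions as the previous one; key0_secret and ckey0_secret are hypothetical variables standing in for the DHHC-1:00:... and DHHC-1:03:... strings printed in this log:

# Verification half of an iteration; flags and jq filters mirror the log above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5

qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# the qpair must report the expected digest, DH group, and a completed auth state
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

# tear down the user-space controller, then re-authenticate with the kernel host
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
  --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 \
  --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0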
00:43:36.876 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:36.876 { 00:43:36.876 "auth": { 00:43:36.876 "dhgroup": "ffdhe6144", 00:43:36.876 "digest": "sha512", 00:43:36.876 "state": "completed" 00:43:36.876 }, 00:43:36.876 "cntlid": 129, 00:43:36.876 "listen_address": { 00:43:36.876 "adrfam": "IPv4", 00:43:36.876 "traddr": "10.0.0.2", 00:43:36.876 "trsvcid": "4420", 00:43:36.876 "trtype": "TCP" 00:43:36.876 }, 00:43:36.876 "peer_address": { 00:43:36.876 "adrfam": "IPv4", 00:43:36.876 "traddr": "10.0.0.1", 00:43:36.876 "trsvcid": "36490", 00:43:36.876 "trtype": "TCP" 00:43:36.876 }, 00:43:36.876 "qid": 0, 00:43:36.876 "state": "enabled" 00:43:36.876 } 00:43:36.876 ]' 00:43:36.876 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:37.135 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:37.135 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:37.135 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:37.135 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:37.135 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:37.135 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:37.135 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:37.394 14:56:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:37.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.963 14:56:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:37.963 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:37.964 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:38.534 00:43:38.534 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:38.534 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:38.534 14:56:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:38.794 { 00:43:38.794 "auth": { 00:43:38.794 "dhgroup": "ffdhe6144", 00:43:38.794 "digest": "sha512", 00:43:38.794 
"state": "completed" 00:43:38.794 }, 00:43:38.794 "cntlid": 131, 00:43:38.794 "listen_address": { 00:43:38.794 "adrfam": "IPv4", 00:43:38.794 "traddr": "10.0.0.2", 00:43:38.794 "trsvcid": "4420", 00:43:38.794 "trtype": "TCP" 00:43:38.794 }, 00:43:38.794 "peer_address": { 00:43:38.794 "adrfam": "IPv4", 00:43:38.794 "traddr": "10.0.0.1", 00:43:38.794 "trsvcid": "36506", 00:43:38.794 "trtype": "TCP" 00:43:38.794 }, 00:43:38.794 "qid": 0, 00:43:38.794 "state": "enabled" 00:43:38.794 } 00:43:38.794 ]' 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:38.794 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:39.054 14:56:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:39.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:39.624 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:39.884 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:40.143 00:43:40.143 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:40.143 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:40.143 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:40.403 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:40.404 { 00:43:40.404 "auth": { 00:43:40.404 "dhgroup": "ffdhe6144", 00:43:40.404 "digest": "sha512", 00:43:40.404 "state": "completed" 00:43:40.404 }, 00:43:40.404 "cntlid": 133, 00:43:40.404 "listen_address": { 00:43:40.404 "adrfam": "IPv4", 00:43:40.404 "traddr": "10.0.0.2", 00:43:40.404 "trsvcid": "4420", 00:43:40.404 "trtype": "TCP" 00:43:40.404 }, 00:43:40.404 "peer_address": { 00:43:40.404 "adrfam": "IPv4", 00:43:40.404 "traddr": "10.0.0.1", 00:43:40.404 "trsvcid": "36528", 00:43:40.404 "trtype": "TCP" 00:43:40.404 }, 00:43:40.404 "qid": 0, 00:43:40.404 "state": "enabled" 00:43:40.404 } 00:43:40.404 ]' 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:40.404 14:56:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:40.663 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:41.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:41.232 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:41.491 14:57:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:43:41.751 00:43:41.751 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:41.751 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:41.751 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:42.011 { 00:43:42.011 "auth": { 00:43:42.011 "dhgroup": "ffdhe6144", 00:43:42.011 "digest": "sha512", 00:43:42.011 "state": "completed" 00:43:42.011 }, 00:43:42.011 "cntlid": 135, 00:43:42.011 "listen_address": { 00:43:42.011 "adrfam": "IPv4", 00:43:42.011 "traddr": "10.0.0.2", 00:43:42.011 "trsvcid": "4420", 00:43:42.011 "trtype": "TCP" 00:43:42.011 }, 00:43:42.011 "peer_address": { 00:43:42.011 "adrfam": "IPv4", 00:43:42.011 "traddr": "10.0.0.1", 00:43:42.011 "trsvcid": "43774", 00:43:42.011 "trtype": "TCP" 00:43:42.011 }, 00:43:42.011 "qid": 0, 00:43:42.011 "state": "enabled" 00:43:42.011 } 00:43:42.011 ]' 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:42.011 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:43:42.271 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:42.271 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:42.271 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:42.271 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:42.271 14:57:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:42.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:42.872 14:57:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:42.872 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:43.131 14:57:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:43.699 00:43:43.699 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:43.699 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:43.699 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:43.958 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:43.958 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:43.958 14:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:43.958 14:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:43.958 14:57:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:43.958 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:43.958 { 00:43:43.958 "auth": { 00:43:43.958 "dhgroup": "ffdhe8192", 00:43:43.958 "digest": "sha512", 
00:43:43.958 "state": "completed" 00:43:43.958 }, 00:43:43.958 "cntlid": 137, 00:43:43.958 "listen_address": { 00:43:43.958 "adrfam": "IPv4", 00:43:43.958 "traddr": "10.0.0.2", 00:43:43.958 "trsvcid": "4420", 00:43:43.958 "trtype": "TCP" 00:43:43.958 }, 00:43:43.958 "peer_address": { 00:43:43.958 "adrfam": "IPv4", 00:43:43.958 "traddr": "10.0.0.1", 00:43:43.958 "trsvcid": "43808", 00:43:43.958 "trtype": "TCP" 00:43:43.958 }, 00:43:43.958 "qid": 0, 00:43:43.958 "state": "enabled" 00:43:43.958 } 00:43:43.959 ]' 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:43.959 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:44.218 14:57:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:44.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:44.787 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:45.046 14:57:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:45.615 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:45.615 { 00:43:45.615 "auth": { 00:43:45.615 "dhgroup": "ffdhe8192", 00:43:45.615 "digest": "sha512", 00:43:45.615 "state": "completed" 00:43:45.615 }, 00:43:45.615 "cntlid": 139, 00:43:45.615 "listen_address": { 00:43:45.615 "adrfam": "IPv4", 00:43:45.615 "traddr": "10.0.0.2", 00:43:45.615 "trsvcid": "4420", 00:43:45.615 "trtype": "TCP" 00:43:45.615 }, 00:43:45.615 "peer_address": { 00:43:45.615 "adrfam": "IPv4", 00:43:45.615 "traddr": "10.0.0.1", 00:43:45.615 "trsvcid": "43834", 00:43:45.615 "trtype": "TCP" 00:43:45.615 }, 00:43:45.615 "qid": 0, 00:43:45.615 "state": "enabled" 00:43:45.615 } 00:43:45.615 ]' 00:43:45.615 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:45.874 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:45.874 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:45.874 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:45.874 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:45.874 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:43:45.875 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:45.875 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:46.134 14:57:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:01:ZTNjYjU0NDg1MDk3NTViZDMxYzJhOTIxMDQxYTVlMjSqZTCF: --dhchap-ctrl-secret DHHC-1:02:YWMzZjc2MTU0NzAwZWYwMmVmMmZhMzRhYmM0NDhiZTcxMGVjNTZmMmQ1NmIxMzU53N4GZA==: 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:46.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:46.702 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:46.961 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:43:47.529 00:43:47.529 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:47.529 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:47.529 14:57:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:47.529 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:47.529 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:47.529 14:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:47.529 14:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:47.529 14:57:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:47.529 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:47.529 { 00:43:47.529 "auth": { 00:43:47.529 "dhgroup": "ffdhe8192", 00:43:47.529 "digest": "sha512", 00:43:47.529 "state": "completed" 00:43:47.529 }, 00:43:47.529 "cntlid": 141, 00:43:47.529 "listen_address": { 00:43:47.529 "adrfam": "IPv4", 00:43:47.529 "traddr": "10.0.0.2", 00:43:47.530 "trsvcid": "4420", 00:43:47.530 "trtype": "TCP" 00:43:47.530 }, 00:43:47.530 "peer_address": { 00:43:47.530 "adrfam": "IPv4", 00:43:47.530 "traddr": "10.0.0.1", 00:43:47.530 "trsvcid": "43858", 00:43:47.530 "trtype": "TCP" 00:43:47.530 }, 00:43:47.530 "qid": 0, 00:43:47.530 "state": "enabled" 00:43:47.530 } 00:43:47.530 ]' 00:43:47.530 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:47.789 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:47.789 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:47.789 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:47.789 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:47.789 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:47.789 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:47.789 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:48.048 14:57:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:02:NjQ4YzA4NGQ4YjdlMTFjN2Q1MzIzNjgwZjVlZmMwMzhmYzEwNDZlYTM1MmFlMGNhRA/YTw==: --dhchap-ctrl-secret DHHC-1:01:NGU5MTI2ZDA0NGJmMGQ4YTlhYjViYTc5NWEwMzBmYjT3AH/i: 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:48.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:48.616 14:57:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:48.616 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:49.185 00:43:49.185 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:49.185 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:49.185 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:49.444 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:49.444 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:49.444 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:49.444 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:49.444 14:57:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:49.444 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:49.444 { 00:43:49.444 "auth": { 00:43:49.444 "dhgroup": "ffdhe8192", 00:43:49.444 
"digest": "sha512", 00:43:49.444 "state": "completed" 00:43:49.444 }, 00:43:49.444 "cntlid": 143, 00:43:49.444 "listen_address": { 00:43:49.444 "adrfam": "IPv4", 00:43:49.444 "traddr": "10.0.0.2", 00:43:49.444 "trsvcid": "4420", 00:43:49.444 "trtype": "TCP" 00:43:49.444 }, 00:43:49.444 "peer_address": { 00:43:49.444 "adrfam": "IPv4", 00:43:49.444 "traddr": "10.0.0.1", 00:43:49.444 "trsvcid": "43898", 00:43:49.444 "trtype": "TCP" 00:43:49.444 }, 00:43:49.444 "qid": 0, 00:43:49.444 "state": "enabled" 00:43:49.444 } 00:43:49.444 ]' 00:43:49.444 14:57:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:49.444 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:49.445 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:49.445 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:49.445 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:49.704 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:49.704 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:49.704 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:49.704 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:50.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:43:50.272 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:43:50.273 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:43:50.273 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:43:50.273 14:57:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:43:50.531 14:57:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:50.531 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:43:51.142 00:43:51.142 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:51.142 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:51.142 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:51.401 { 00:43:51.401 "auth": { 00:43:51.401 "dhgroup": "ffdhe8192", 00:43:51.401 "digest": "sha512", 00:43:51.401 "state": "completed" 00:43:51.401 }, 00:43:51.401 "cntlid": 145, 00:43:51.401 "listen_address": { 00:43:51.401 "adrfam": "IPv4", 00:43:51.401 "traddr": "10.0.0.2", 00:43:51.401 "trsvcid": "4420", 00:43:51.401 "trtype": "TCP" 00:43:51.401 }, 00:43:51.401 "peer_address": { 00:43:51.401 "adrfam": "IPv4", 00:43:51.401 "traddr": "10.0.0.1", 00:43:51.401 "trsvcid": "43922", 00:43:51.401 "trtype": "TCP" 00:43:51.401 }, 00:43:51.401 "qid": 0, 00:43:51.401 "state": "enabled" 00:43:51.401 } 00:43:51.401 ]' 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:51.401 14:57:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:51.659 14:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:00:OTcxYzUzZGZkNGQyYWRkYmVmNTEyNDU4MDE3ZmUyN2ZkMDg2MDg5YThkYjAzNzVmDZK9aA==: --dhchap-ctrl-secret DHHC-1:03:ZmMyYTkzYTQ2NjEzZGJhMDI3ODYxYTFhYzM4YWI2MDUwZmUzOGRjNThlNWY3MTJhOTIyY2M2Y2YyNzE0ZDk3N2PNUpc=: 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:52.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:43:52.228 14:57:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:43:52.797 2024/07/22 14:57:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:43:52.797 request: 00:43:52.797 { 00:43:52.797 "method": "bdev_nvme_attach_controller", 00:43:52.797 "params": { 00:43:52.797 "name": "nvme0", 00:43:52.797 "trtype": "tcp", 00:43:52.797 "traddr": "10.0.0.2", 00:43:52.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5", 00:43:52.797 "adrfam": "ipv4", 00:43:52.797 "trsvcid": "4420", 00:43:52.797 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:43:52.797 "dhchap_key": "key2" 00:43:52.797 } 00:43:52.797 } 00:43:52.797 Got JSON-RPC error response 00:43:52.797 GoRPCClient: error on JSON-RPC call 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:43:52.797 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:43:53.366 2024/07/22 14:57:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:43:53.366 request: 00:43:53.366 { 00:43:53.366 "method": "bdev_nvme_attach_controller", 00:43:53.366 "params": { 00:43:53.367 "name": "nvme0", 00:43:53.367 "trtype": "tcp", 00:43:53.367 "traddr": "10.0.0.2", 00:43:53.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5", 00:43:53.367 "adrfam": "ipv4", 00:43:53.367 "trsvcid": "4420", 00:43:53.367 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:43:53.367 "dhchap_key": "key1", 00:43:53.367 "dhchap_ctrlr_key": "ckey2" 00:43:53.367 } 00:43:53.367 } 00:43:53.367 Got JSON-RPC error response 00:43:53.367 GoRPCClient: error on JSON-RPC call 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key1 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:53.367 14:57:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:43:53.936 2024/07/22 14:57:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey1 dhchap_key:key1 hostnqn:nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:43:53.936 request: 00:43:53.936 { 00:43:53.936 "method": "bdev_nvme_attach_controller", 00:43:53.936 "params": { 00:43:53.936 "name": "nvme0", 00:43:53.936 "trtype": "tcp", 00:43:53.936 "traddr": "10.0.0.2", 00:43:53.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5", 00:43:53.936 "adrfam": "ipv4", 00:43:53.936 "trsvcid": "4420", 00:43:53.936 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:43:53.936 "dhchap_key": "key1", 00:43:53.936 "dhchap_ctrlr_key": "ckey1" 00:43:53.936 } 00:43:53.936 } 00:43:53.936 Got JSON-RPC error response 00:43:53.936 GoRPCClient: error on JSON-RPC call 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 93145 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 93145 ']' 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 93145 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93145 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:43:53.936 killing process with pid 93145 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93145' 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 93145 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 93145 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:43:53.936 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=97693 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 97693 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 97693 ']' 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
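The records around here show the auth test tearing down its first nvmf_tgt instance (pid 93145) and launching a fresh one with DH-HMAC-CHAP debug logging before re-running the key exchanges. A minimal sketch of that restart, assuming the same repo path and the nvmf_tgt_ns_spdk network namespace used in this run (old_pid is a placeholder for the previous target's pid, not a variable from the harness):

  # stop the previous target, then relaunch it with the nvmf_auth log flag enabled
  kill "$old_pid"    # the harness's killprocess helper also waits for the pid to exit
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # block until the app answers on /var/tmp/spdk.sock before issuing any RPCs,
  # mirroring the waitforlisten loop traced in the surrounding records
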
00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:54.196 14:57:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 97693 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 97693 ']' 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:55.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:55.151 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
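At this point the restarted target has accepted key3 for the host NQN, and the trace goes on to attach the host controller with the same key. The connect_authenticate round exercised throughout this log condenses to roughly the following sketch, reusing the sockets, addresses and NQNs from this run (key3 refers to a DH-HMAC-CHAP key loaded earlier in the test, not defined here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_nqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5
  subsys=nqn.2024-03.io.spdk:cnode0

  # target side: allow this host to authenticate against the subsystem with key3
  $rpc nvmf_subsystem_add_host "$subsys" "$host_nqn" --dhchap-key key3

  # host side: pin the initiator to sha512/ffdhe8192 and attach using key3
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$host_nqn" -n "$subsys" --dhchap-key key3

  # verify the negotiated qpair reports digest sha512, dhgroup ffdhe8192, state completed
  $rpc nvmf_subsystem_get_qpairs "$subsys" | jq -r '.[0].auth.state'

  # detach before the next key/digest combination is tried
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
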
00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:55.411 14:57:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:55.980 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:55.980 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:43:55.980 { 00:43:55.980 "auth": { 00:43:55.980 "dhgroup": "ffdhe8192", 00:43:55.980 "digest": "sha512", 00:43:55.980 "state": "completed" 00:43:55.980 }, 00:43:55.980 "cntlid": 1, 00:43:55.980 "listen_address": { 00:43:55.980 "adrfam": "IPv4", 00:43:55.980 "traddr": "10.0.0.2", 00:43:55.980 "trsvcid": "4420", 00:43:55.980 "trtype": "TCP" 00:43:55.980 }, 00:43:55.980 "peer_address": { 00:43:55.980 "adrfam": "IPv4", 00:43:55.980 "traddr": "10.0.0.1", 00:43:55.980 "trsvcid": "58354", 00:43:55.980 "trtype": "TCP" 00:43:55.980 }, 00:43:55.980 "qid": 0, 00:43:55.980 "state": "enabled" 00:43:55.980 } 00:43:55.980 ]' 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:56.239 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:56.498 14:57:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid 03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-secret DHHC-1:03:MDc3MDZlZDdlNzJhZWQyM2RkODI2YzU4NjM4NGJmM2MyOWMzN2ViMGNlMDQzMTUyMzc0MTQyMGJjY2UwMWJhMPYII3I=: 00:43:57.067 14:57:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:43:57.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --dhchap-key key3 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:43:57.067 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.327 2024/07/22 14:57:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key3 hostnqn:nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:43:57.327 request: 00:43:57.327 { 00:43:57.327 "method": "bdev_nvme_attach_controller", 00:43:57.327 "params": { 00:43:57.327 "name": "nvme0", 00:43:57.327 "trtype": "tcp", 00:43:57.327 "traddr": "10.0.0.2", 00:43:57.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5", 00:43:57.327 "adrfam": "ipv4", 00:43:57.327 "trsvcid": "4420", 00:43:57.327 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:43:57.327 "dhchap_key": "key3" 00:43:57.327 } 00:43:57.327 } 00:43:57.327 Got JSON-RPC error response 00:43:57.327 GoRPCClient: error on JSON-RPC call 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:43:57.327 14:57:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:57.586 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.587 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:43:57.846 2024/07/22 14:57:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key3 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:43:57.846 request: 00:43:57.846 { 00:43:57.846 "method": "bdev_nvme_attach_controller", 00:43:57.846 "params": { 00:43:57.846 "name": "nvme0", 00:43:57.846 "trtype": "tcp", 00:43:57.846 "traddr": "10.0.0.2", 00:43:57.846 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5", 00:43:57.846 "adrfam": "ipv4", 00:43:57.846 "trsvcid": "4420", 00:43:57.846 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:43:57.846 "dhchap_key": "key3" 00:43:57.846 } 00:43:57.846 } 00:43:57.846 Got JSON-RPC error response 00:43:57.846 GoRPCClient: error on JSON-RPC call 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:43:57.846 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:43:58.106 14:57:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:43:58.106 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:43:58.366 2024/07/22 14:57:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:key1 dhchap_key:key0 hostnqn:nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:43:58.366 request: 00:43:58.366 { 00:43:58.366 "method": "bdev_nvme_attach_controller", 00:43:58.366 "params": { 00:43:58.366 "name": "nvme0", 00:43:58.366 "trtype": "tcp", 00:43:58.366 "traddr": "10.0.0.2", 00:43:58.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5", 00:43:58.366 "adrfam": "ipv4", 00:43:58.366 "trsvcid": "4420", 00:43:58.366 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:43:58.366 "dhchap_key": "key0", 00:43:58.366 "dhchap_ctrlr_key": "key1" 00:43:58.366 } 00:43:58.366 } 00:43:58.366 Got JSON-RPC error response 00:43:58.366 GoRPCClient: error on JSON-RPC call 00:43:58.366 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:43:58.366 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:43:58.366 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:43:58.366 14:57:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:43:58.366 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:43:58.366 14:57:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:43:58.626 00:43:58.626 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc 
bdev_nvme_get_controllers 00:43:58.626 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:43:58.626 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 93189 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 93189 ']' 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 93189 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93189 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:43:58.886 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:43:59.146 killing process with pid 93189 00:43:59.146 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93189' 00:43:59.146 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 93189 00:43:59.146 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 93189 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:43:59.406 rmmod nvme_tcp 00:43:59.406 rmmod nvme_fabrics 00:43:59.406 rmmod nvme_keyring 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 97693 ']' 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 97693 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 97693 ']' 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 97693 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:43:59.406 14:57:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97693 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:43:59.406 killing process with pid 97693 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97693' 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 97693 00:43:59.406 14:57:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 97693 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.HTy /tmp/spdk.key-sha256.X2H /tmp/spdk.key-sha384.swU /tmp/spdk.key-sha512.urs /tmp/spdk.key-sha512.WlG /tmp/spdk.key-sha384.vUC /tmp/spdk.key-sha256.mUL '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:43:59.666 00:43:59.666 real 2m17.709s 00:43:59.666 user 5m29.364s 00:43:59.666 sys 0m19.172s 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:59.666 14:57:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:43:59.666 ************************************ 00:43:59.666 END TEST nvmf_auth_target 00:43:59.666 ************************************ 00:43:59.926 14:57:19 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:43:59.926 14:57:19 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:43:59.926 14:57:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:43:59.926 14:57:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:59.926 14:57:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:59.926 ************************************ 00:43:59.926 START TEST nvmf_bdevio_no_huge 00:43:59.926 ************************************ 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:43:59.926 * Looking for test storage... 
00:43:59.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:43:59.926 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:43:59.927 14:57:19 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:43:59.927 Cannot find device "nvmf_tgt_br" 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:43:59.927 Cannot find device "nvmf_tgt_br2" 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:43:59.927 Cannot find device "nvmf_tgt_br" 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:43:59.927 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:44:00.187 Cannot find device "nvmf_tgt_br2" 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
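Note on the 'Cannot find device' errors just above (and the 'Cannot open network namespace' ones just below): they are expected on a clean host. Before building the test topology, the harness first tears down whatever nvmf_* interfaces a previous run may have left behind, tolerating each failure. A minimal sketch of that teardown-first idiom, with device and namespace names taken from the trace (this is an illustration, not the script verbatim):

  for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$br" nomaster 2>/dev/null || true   # detach from any old bridge
      ip link set "$br" down     2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if        2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true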
00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:00.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:00.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:44:00.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:00.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:44:00.187 00:44:00.187 --- 10.0.0.2 ping statistics --- 00:44:00.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:00.187 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:44:00.187 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:00.187 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:44:00.187 00:44:00.187 --- 10.0.0.3 ping statistics --- 00:44:00.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:00.187 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:00.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:00.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:44:00.187 00:44:00.187 --- 10.0.0.1 ping statistics --- 00:44:00.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:00.187 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:00.187 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:44:00.188 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:00.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
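For readers following along, the topology those three pings just verified can be reproduced with plain iproute2. The sketch below condenses the setup steps visible in the trace above (interface names, addresses and the port-4420 iptables rule as logged); it illustrates the layout rather than reproducing the harness script itself.

  # One namespace holds the SPDK target side; the host side acts as the initiator.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator 10.0.0.1 on the host, target listeners 10.0.0.2/10.0.0.3 in the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # A bridge on the host stitches the three peer ends together.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Let NVMe/TCP (port 4420) and bridged traffic through.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # namespace -> host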
00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=98087 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 98087 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 98087 ']' 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:00.447 14:57:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:00.447 [2024-07-22 14:57:19.871558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:00.447 [2024-07-22 14:57:19.871630] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:44:00.447 [2024-07-22 14:57:19.999556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:00.707 [2024-07-22 14:57:20.085548] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:00.707 [2024-07-22 14:57:20.085599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:00.707 [2024-07-22 14:57:20.085622] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:00.707 [2024-07-22 14:57:20.085627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:00.707 [2024-07-22 14:57:20.085632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
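The reactor notices below line up with the -m 0x78 core mask passed to nvmf_tgt above: 0x78 is binary 1111000, i.e. cores 3 through 6, which is why exactly four reactors come up (the start order is only scheduling noise). Decoding such a mask needs nothing more than shell arithmetic; a quick sketch:

  mask=0x78
  for core in $(seq 0 63); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 3, 4, 5 and 6 for 0x78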
00:44:00.707 [2024-07-22 14:57:20.085844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:44:00.707 [2024-07-22 14:57:20.086199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:44:00.707 [2024-07-22 14:57:20.086425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:44:00.707 [2024-07-22 14:57:20.086371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:01.277 [2024-07-22 14:57:20.759800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:01.277 Malloc0 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:01.277 [2024-07-22 14:57:20.796760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:44:01.277 { 00:44:01.277 "params": { 00:44:01.277 "name": "Nvme$subsystem", 00:44:01.277 "trtype": "$TEST_TRANSPORT", 00:44:01.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:01.277 "adrfam": "ipv4", 00:44:01.277 "trsvcid": "$NVMF_PORT", 00:44:01.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:01.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:01.277 "hdgst": ${hdgst:-false}, 00:44:01.277 "ddgst": ${ddgst:-false} 00:44:01.277 }, 00:44:01.277 "method": "bdev_nvme_attach_controller" 00:44:01.277 } 00:44:01.277 EOF 00:44:01.277 )") 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:44:01.277 14:57:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:44:01.277 "params": { 00:44:01.277 "name": "Nvme1", 00:44:01.277 "trtype": "tcp", 00:44:01.277 "traddr": "10.0.0.2", 00:44:01.277 "adrfam": "ipv4", 00:44:01.277 "trsvcid": "4420", 00:44:01.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:01.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:01.277 "hdgst": false, 00:44:01.277 "ddgst": false 00:44:01.277 }, 00:44:01.277 "method": "bdev_nvme_attach_controller" 00:44:01.277 }' 00:44:01.277 [2024-07-22 14:57:20.854549] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
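While bdevperf brings up its own EAL here, the target-side provisioning it is about to exercise was done just above with a short RPC sequence, plus a generated bdev_nvme_attach_controller stanza fed to bdevperf over /dev/fd/62. Stripped of the xtrace noise, the same calls in plain rpc.py form (arguments exactly as logged) would look roughly like this sketch:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf then attaches from the initiator side with a JSON config equivalent to
  # attaching bdev "Nvme1" over tcp to 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1,
  # hostnqn nqn.2016-06.io.spdk:host1 (hdgst/ddgst disabled), as printed in the trace.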
00:44:01.277 [2024-07-22 14:57:20.854623] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid98142 ] 00:44:01.546 [2024-07-22 14:57:20.979480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:44:01.546 [2024-07-22 14:57:21.080079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:01.546 [2024-07-22 14:57:21.080265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:44:01.546 [2024-07-22 14:57:21.080268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:01.818 I/O targets: 00:44:01.818 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:44:01.818 00:44:01.818 00:44:01.818 CUnit - A unit testing framework for C - Version 2.1-3 00:44:01.818 http://cunit.sourceforge.net/ 00:44:01.818 00:44:01.818 00:44:01.818 Suite: bdevio tests on: Nvme1n1 00:44:01.818 Test: blockdev write read block ...passed 00:44:01.818 Test: blockdev write zeroes read block ...passed 00:44:01.818 Test: blockdev write zeroes read no split ...passed 00:44:01.818 Test: blockdev write zeroes read split ...passed 00:44:01.818 Test: blockdev write zeroes read split partial ...passed 00:44:01.818 Test: blockdev reset ...[2024-07-22 14:57:21.358742] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:44:01.818 [2024-07-22 14:57:21.358828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81d240 (9): Bad file descriptor 00:44:01.818 [2024-07-22 14:57:21.369557] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:44:01.818 passed 00:44:01.818 Test: blockdev write read 8 blocks ...passed 00:44:01.818 Test: blockdev write read size > 128k ...passed 00:44:01.818 Test: blockdev write read invalid size ...passed 00:44:01.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:44:01.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:44:01.818 Test: blockdev write read max offset ...passed 00:44:02.077 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:44:02.077 Test: blockdev writev readv 8 blocks ...passed 00:44:02.077 Test: blockdev writev readv 30 x 1block ...passed 00:44:02.077 Test: blockdev writev readv block ...passed 00:44:02.077 Test: blockdev writev readv size > 128k ...passed 00:44:02.077 Test: blockdev writev readv size > 128k in two iovs ...passed 00:44:02.077 Test: blockdev comparev and writev ...[2024-07-22 14:57:21.542695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.542751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.542773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.542785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.543076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.543114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.543133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.543145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.543397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.543437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.543453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.543463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.543697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.543730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.543747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:44:02.077 [2024-07-22 14:57:21.543756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:44:02.077 passed 00:44:02.077 Test: blockdev nvme passthru rw ...passed 00:44:02.077 Test: blockdev nvme passthru vendor specific ...[2024-07-22 14:57:21.627985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:02.077 [2024-07-22 14:57:21.628012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.628091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:02.077 [2024-07-22 14:57:21.628100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:44:02.077 [2024-07-22 14:57:21.628190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:02.077 [2024-07-22 14:57:21.628203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:44:02.077 passed 00:44:02.077 Test: blockdev nvme admin passthru ...[2024-07-22 14:57:21.628286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:44:02.077 [2024-07-22 14:57:21.628301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:44:02.077 passed 00:44:02.077 Test: blockdev copy ...passed 00:44:02.077 00:44:02.077 Run Summary: Type Total Ran Passed Failed Inactive 00:44:02.077 suites 1 1 n/a 0 0 00:44:02.077 tests 23 23 23 0 0 00:44:02.077 asserts 152 152 152 0 
n/a 00:44:02.077 00:44:02.077 Elapsed time = 0.939 seconds 00:44:02.336 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:02.336 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:02.336 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:02.595 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:02.595 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:44:02.595 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:44:02.595 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:02.595 14:57:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:02.595 rmmod nvme_tcp 00:44:02.595 rmmod nvme_fabrics 00:44:02.595 rmmod nvme_keyring 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 98087 ']' 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 98087 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 98087 ']' 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 98087 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98087 00:44:02.595 killing process with pid 98087 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98087' 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 98087 00:44:02.595 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 98087 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:44:02.854 00:44:02.854 real 0m3.166s 00:44:02.854 user 0m11.004s 00:44:02.854 sys 0m1.161s 00:44:02.854 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:44:02.855 14:57:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:44:02.855 ************************************ 00:44:02.855 END TEST nvmf_bdevio_no_huge 00:44:02.855 ************************************ 00:44:03.113 14:57:22 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:44:03.113 14:57:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:44:03.113 14:57:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:44:03.113 14:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:03.113 ************************************ 00:44:03.113 START TEST nvmf_tls 00:44:03.113 ************************************ 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:44:03.113 * Looking for test storage... 00:44:03.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:03.113 14:57:22 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:44:03.114 Cannot find device "nvmf_tgt_br" 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:44:03.114 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:44:03.373 Cannot find device "nvmf_tgt_br2" 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:44:03.373 Cannot find device "nvmf_tgt_br" 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:44:03.373 Cannot find device "nvmf_tgt_br2" 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:44:03.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:44:03.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:44:03.373 14:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:44:03.631 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:44:03.631 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # 
ping -c 1 10.0.0.2 00:44:03.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:03.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:44:03.632 00:44:03.632 --- 10.0.0.2 ping statistics --- 00:44:03.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:03.632 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:44:03.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:44:03.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:44:03.632 00:44:03.632 --- 10.0.0.3 ping statistics --- 00:44:03.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:03.632 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:44:03.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:44:03.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:44:03.632 00:44:03.632 --- 10.0.0.1 ping statistics --- 00:44:03.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:03.632 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=98329 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 98329 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 98329 ']' 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:03.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
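One detail worth noting before the trace continues: the target for the TLS test is started with --wait-for-rpc, presumably so the ssl socket implementation can be selected and its TLS version pinned before SPDK finishes initializing; the same RPC ordering appears a few lines further down. A minimal sketch of that ordering (rpc.py path as logged):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC sock_set_default_impl -i ssl                        # make the TLS-capable impl the default
  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC framework_start_init                                # only now complete subsystem init
  $RPC sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13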
00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:03.632 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:03.632 [2024-07-22 14:57:23.156169] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:03.632 [2024-07-22 14:57:23.156225] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:03.890 [2024-07-22 14:57:23.295955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:03.890 [2024-07-22 14:57:23.346921] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:03.890 [2024-07-22 14:57:23.346968] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:03.890 [2024-07-22 14:57:23.346974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:03.890 [2024-07-22 14:57:23.346979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:03.890 [2024-07-22 14:57:23.346983] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:03.891 [2024-07-22 14:57:23.347003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:04.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:04.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:04.459 14:57:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:04.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:04.459 14:57:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:04.459 14:57:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:04.459 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:44:04.459 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:44:04.718 true 00:44:04.718 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:44:04.718 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:44:04.978 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:44:04.978 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:44:04.978 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:44:05.237 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:44:05.237 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:44:05.237 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:44:05.237 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:44:05.237 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:44:05.497 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:44:05.497 14:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- 
# jq -r .tls_version 00:44:05.756 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:44:05.756 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:44:05.756 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:44:05.756 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:44:06.014 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:44:06.014 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:44:06.014 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:44:06.014 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:44:06.014 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:44:06.273 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:44:06.273 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:44:06.273 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:44:06.532 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:44:06.532 14:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.uXTrvpqFUj 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:44:06.791 
14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ybU7h9gXr5 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.uXTrvpqFUj 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ybU7h9gXr5 00:44:06.791 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:44:07.050 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:44:07.309 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.uXTrvpqFUj 00:44:07.309 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.uXTrvpqFUj 00:44:07.309 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:44:07.567 [2024-07-22 14:57:26.943700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:07.567 14:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:44:07.568 14:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:44:07.826 [2024-07-22 14:57:27.327039] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:07.826 [2024-07-22 14:57:27.327223] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:07.826 14:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:44:08.085 malloc0 00:44:08.085 14:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:44:08.344 14:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uXTrvpqFUj 00:44:08.344 [2024-07-22 14:57:27.882257] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:44:08.344 14:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uXTrvpqFUj 00:44:20.550 Initializing NVMe Controllers 00:44:20.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:44:20.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:44:20.550 Initialization complete. Launching workers. 
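For reference, a minimal sketch of the target-side TLS setup the log records above, assuming the same rpc.py path, listen address, and key file shown in this run (the key file is the mktemp output that was written and chmod 0600'd above):

    # Sketch of the TLS target setup exercised above; all names, paths and flags are taken from this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/tmp/tmp.uXTrvpqFUj                                   # interchange-format PSK, mode 0600

    $rpc sock_impl_set_options -i ssl --tls-version 13        # pin the ssl sock impl to TLS 1.3
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

The spdk_nvme_perf run above connects to this listener with -S ssl and --psk-path pointing at the same key file; its results follow below.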
00:44:20.550 ======================================================== 00:44:20.550 Latency(us) 00:44:20.550 Device Information : IOPS MiB/s Average min max 00:44:20.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16020.89 62.58 3995.20 977.68 4955.01 00:44:20.550 ======================================================== 00:44:20.550 Total : 16020.89 62.58 3995.20 977.68 4955.01 00:44:20.550 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uXTrvpqFUj 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uXTrvpqFUj' 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=98670 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 98670 /var/tmp/bdevperf.sock 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 98670 ']' 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:20.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:20.550 [2024-07-22 14:57:38.105062] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:44:20.550 [2024-07-22 14:57:38.105139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98670 ] 00:44:20.550 [2024-07-22 14:57:38.247064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:20.550 [2024-07-22 14:57:38.300738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:20.550 14:57:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:20.551 14:57:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uXTrvpqFUj 00:44:20.551 [2024-07-22 14:57:39.141291] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:20.551 [2024-07-22 14:57:39.141385] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:44:20.551 TLSTESTn1 00:44:20.551 14:57:39 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:44:20.551 Running I/O for 10 seconds... 00:44:30.547 00:44:30.547 Latency(us) 00:44:30.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:30.547 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:44:30.547 Verification LBA range: start 0x0 length 0x2000 00:44:30.547 TLSTESTn1 : 10.01 6412.60 25.05 0.00 0.00 19927.50 4206.90 18544.68 00:44:30.547 =================================================================================================================== 00:44:30.547 Total : 6412.60 25.05 0.00 0.00 19927.50 4206.90 18544.68 00:44:30.547 0 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 98670 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 98670 ']' 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 98670 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98670 00:44:30.547 killing process with pid 98670 00:44:30.547 Received shutdown signal, test time was about 10.000000 seconds 00:44:30.547 00:44:30.547 Latency(us) 00:44:30.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:30.547 =================================================================================================================== 00:44:30.547 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98670' 00:44:30.547 
14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 98670 00:44:30.547 [2024-07-22 14:57:49.360870] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 98670 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ybU7h9gXr5 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ybU7h9gXr5 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ybU7h9gXr5 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ybU7h9gXr5' 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=98818 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 98818 /var/tmp/bdevperf.sock 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 98818 ']' 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:30.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:30.547 14:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:30.547 [2024-07-22 14:57:49.601543] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:44:30.547 [2024-07-22 14:57:49.601712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98818 ] 00:44:30.547 [2024-07-22 14:57:49.741027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:30.547 [2024-07-22 14:57:49.791720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ybU7h9gXr5 00:44:31.115 [2024-07-22 14:57:50.683073] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:31.115 [2024-07-22 14:57:50.683170] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:44:31.115 [2024-07-22 14:57:50.693163] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:31.115 [2024-07-22 14:57:50.693429] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206f440 (107): Transport endpoint is not connected 00:44:31.115 [2024-07-22 14:57:50.694420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206f440 (9): Bad file descriptor 00:44:31.115 [2024-07-22 14:57:50.695416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:31.115 [2024-07-22 14:57:50.695431] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:44:31.115 [2024-07-22 14:57:50.695441] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:44:31.115 2024/07/22 14:57:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.ybU7h9gXr5 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:44:31.115 request: 00:44:31.115 { 00:44:31.115 "method": "bdev_nvme_attach_controller", 00:44:31.115 "params": { 00:44:31.115 "name": "TLSTEST", 00:44:31.115 "trtype": "tcp", 00:44:31.115 "traddr": "10.0.0.2", 00:44:31.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:31.115 "adrfam": "ipv4", 00:44:31.115 "trsvcid": "4420", 00:44:31.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:31.115 "psk": "/tmp/tmp.ybU7h9gXr5" 00:44:31.115 } 00:44:31.115 } 00:44:31.115 Got JSON-RPC error response 00:44:31.115 GoRPCClient: error on JSON-RPC call 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 98818 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 98818 ']' 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 98818 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:31.115 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98818 00:44:31.375 killing process with pid 98818 00:44:31.375 Received shutdown signal, test time was about 10.000000 seconds 00:44:31.375 00:44:31.375 Latency(us) 00:44:31.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:31.375 =================================================================================================================== 00:44:31.375 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98818' 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 98818 00:44:31.375 [2024-07-22 14:57:50.753987] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 98818 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uXTrvpqFUj 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uXTrvpqFUj 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uXTrvpqFUj 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uXTrvpqFUj' 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=98864 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 98864 /var/tmp/bdevperf.sock 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 98864 ']' 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:31.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:31.375 14:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:31.375 [2024-07-22 14:57:50.981436] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:44:31.375 [2024-07-22 14:57:50.981496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98864 ] 00:44:31.634 [2024-07-22 14:57:51.119234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:31.634 [2024-07-22 14:57:51.167849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:32.571 14:57:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:32.571 14:57:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:32.571 14:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.uXTrvpqFUj 00:44:32.571 [2024-07-22 14:57:51.998850] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:32.571 [2024-07-22 14:57:51.998958] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:44:32.571 [2024-07-22 14:57:52.003401] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:44:32.571 [2024-07-22 14:57:52.003436] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:44:32.571 [2024-07-22 14:57:52.003495] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:32.571 [2024-07-22 14:57:52.004175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa2440 (107): Transport endpoint is not connected 00:44:32.571 [2024-07-22 14:57:52.005161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa2440 (9): Bad file descriptor 00:44:32.572 [2024-07-22 14:57:52.006157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:32.572 [2024-07-22 14:57:52.006178] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:44:32.572 [2024-07-22 14:57:52.006189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
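As the messages above show, the target looks the PSK up by the identity string "NVMe0R01 <hostnqn> <subnqn>", and only the host1/cnode1 pairing was registered with nvmf_subsystem_add_host, so the host2 attempt finds no key. A small illustration of the two identities involved, with the format copied from the error text above:

    # Identity registered on the target vs. identity presented by this attach attempt.
    printf 'registered: NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1
    printf 'presented:  NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1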
00:44:32.572 2024/07/22 14:57:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.uXTrvpqFUj subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:44:32.572 request: 00:44:32.572 { 00:44:32.572 "method": "bdev_nvme_attach_controller", 00:44:32.572 "params": { 00:44:32.572 "name": "TLSTEST", 00:44:32.572 "trtype": "tcp", 00:44:32.572 "traddr": "10.0.0.2", 00:44:32.572 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:44:32.572 "adrfam": "ipv4", 00:44:32.572 "trsvcid": "4420", 00:44:32.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:32.572 "psk": "/tmp/tmp.uXTrvpqFUj" 00:44:32.572 } 00:44:32.572 } 00:44:32.572 Got JSON-RPC error response 00:44:32.572 GoRPCClient: error on JSON-RPC call 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 98864 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 98864 ']' 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 98864 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98864 00:44:32.572 killing process with pid 98864 00:44:32.572 Received shutdown signal, test time was about 10.000000 seconds 00:44:32.572 00:44:32.572 Latency(us) 00:44:32.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:32.572 =================================================================================================================== 00:44:32.572 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98864' 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 98864 00:44:32.572 [2024-07-22 14:57:52.063868] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:44:32.572 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 98864 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uXTrvpqFUj 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uXTrvpqFUj 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uXTrvpqFUj 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uXTrvpqFUj' 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=98904 00:44:32.831 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 98904 /var/tmp/bdevperf.sock 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 98904 ']' 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:32.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:32.832 14:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:32.832 [2024-07-22 14:57:52.292244] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:44:32.832 [2024-07-22 14:57:52.292724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98904 ] 00:44:32.832 [2024-07-22 14:57:52.432214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:33.091 [2024-07-22 14:57:52.478056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:33.692 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:33.692 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:33.693 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uXTrvpqFUj 00:44:33.693 [2024-07-22 14:57:53.301083] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:33.693 [2024-07-22 14:57:53.301176] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:44:33.693 [2024-07-22 14:57:53.308283] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:44:33.693 [2024-07-22 14:57:53.308313] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:44:33.693 [2024-07-22 14:57:53.308352] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:33.693 [2024-07-22 14:57:53.308478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2098440 (107): Transport endpoint is not connected 00:44:33.693 [2024-07-22 14:57:53.309466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2098440 (9): Bad file descriptor 00:44:33.693 [2024-07-22 14:57:53.310463] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:44:33.693 [2024-07-22 14:57:53.310479] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:44:33.693 [2024-07-22 14:57:53.310488] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:44:33.693 2024/07/22 14:57:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.uXTrvpqFUj subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:44:33.693 request: 00:44:33.693 { 00:44:33.693 "method": "bdev_nvme_attach_controller", 00:44:33.693 "params": { 00:44:33.693 "name": "TLSTEST", 00:44:33.693 "trtype": "tcp", 00:44:33.693 "traddr": "10.0.0.2", 00:44:33.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:33.693 "adrfam": "ipv4", 00:44:33.693 "trsvcid": "4420", 00:44:33.693 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:44:33.693 "psk": "/tmp/tmp.uXTrvpqFUj" 00:44:33.693 } 00:44:33.693 } 00:44:33.693 Got JSON-RPC error response 00:44:33.693 GoRPCClient: error on JSON-RPC call 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 98904 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 98904 ']' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 98904 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98904 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:33.953 killing process with pid 98904 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98904' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 98904 00:44:33.953 Received shutdown signal, test time was about 10.000000 seconds 00:44:33.953 00:44:33.953 Latency(us) 00:44:33.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:33.953 =================================================================================================================== 00:44:33.953 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:33.953 [2024-07-22 14:57:53.358145] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 98904 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:44:33.953 14:57:53 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=98944 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 98944 /var/tmp/bdevperf.sock 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 98944 ']' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:33.953 14:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:34.212 [2024-07-22 14:57:53.594553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:44:34.212 [2024-07-22 14:57:53.594635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98944 ] 00:44:34.212 [2024-07-22 14:57:53.721625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:34.212 [2024-07-22 14:57:53.769078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:44:35.149 [2024-07-22 14:57:54.615964] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:35.149 [2024-07-22 14:57:54.618021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9caa0 (9): Bad file descriptor 00:44:35.149 [2024-07-22 14:57:54.619014] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:44:35.149 [2024-07-22 14:57:54.619029] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:44:35.149 [2024-07-22 14:57:54.619037] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:44:35.149 2024/07/22 14:57:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:44:35.149 request: 00:44:35.149 { 00:44:35.149 "method": "bdev_nvme_attach_controller", 00:44:35.149 "params": { 00:44:35.149 "name": "TLSTEST", 00:44:35.149 "trtype": "tcp", 00:44:35.149 "traddr": "10.0.0.2", 00:44:35.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:35.149 "adrfam": "ipv4", 00:44:35.149 "trsvcid": "4420", 00:44:35.149 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:44:35.149 } 00:44:35.149 } 00:44:35.149 Got JSON-RPC error response 00:44:35.149 GoRPCClient: error on JSON-RPC call 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 98944 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 98944 ']' 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 98944 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98944 00:44:35.149 killing process with pid 98944 00:44:35.149 Received shutdown signal, test time was about 10.000000 seconds 00:44:35.149 00:44:35.149 Latency(us) 00:44:35.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:35.149 =================================================================================================================== 00:44:35.149 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98944' 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 98944 00:44:35.149 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 98944 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 98329 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 98329 ']' 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 98329 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98329 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98329' 00:44:35.420 killing process with pid 98329 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 98329 00:44:35.420 [2024-07-22 14:57:54.892519] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:44:35.420 14:57:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 98329 00:44:35.679 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:44:35.679 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.cG9ne331uX 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 
/tmp/tmp.cG9ne331uX 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99004 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99004 00:44:35.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99004 ']' 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:35.680 14:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:35.680 [2024-07-22 14:57:55.199855] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:35.680 [2024-07-22 14:57:55.199922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:35.939 [2024-07-22 14:57:55.338038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.939 [2024-07-22 14:57:55.384316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:35.939 [2024-07-22 14:57:55.384471] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:35.939 [2024-07-22 14:57:55.384504] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:35.939 [2024-07-22 14:57:55.384528] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:35.939 [2024-07-22 14:57:55.384534] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
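The keys used throughout follow the NVMe TLS PSK interchange format: NVMeTLSkey-1:<hh>: followed by the base64 of the configured PSK with its CRC32 appended, where <hh> is 01 for SHA-256 and 02 for SHA-384 (the long key derived just above). A sketch of what format_interchange_psk appears to compute; format_psk is a hypothetical stand-alone helper, and the gzip trailer is used only as a convenient source of the same CRC32:

    # NVMeTLSkey-1:<hh>:base64(psk || crc32_le(psk)):  -- reproduces the key strings shown in this log.
    format_psk() {
        local psk=$1 hash=$2      # hash: 01 = SHA-256, 02 = SHA-384
        local b64
        b64=$({ printf '%s' "$psk"
                # gzip stores the CRC32 of its input little-endian in the first 4 trailer bytes
                printf '%s' "$psk" | gzip -c | tail -c 8 | head -c 4
              } | base64 -w0)
        printf 'NVMeTLSkey-1:%s:%s:\n' "$hash" "$b64"
    }
    format_psk 00112233445566778899aabbccddeeff 01                   # the first key above
    format_psk 00112233445566778899aabbccddeeff0011223344556677 02   # the long key above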
00:44:35.939 [2024-07-22 14:57:55.384560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.cG9ne331uX 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cG9ne331uX 00:44:36.506 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:44:36.765 [2024-07-22 14:57:56.264227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:36.765 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:44:37.024 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:44:37.024 [2024-07-22 14:57:56.651530] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:37.024 [2024-07-22 14:57:56.651716] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:37.291 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:44:37.291 malloc0 00:44:37.291 14:57:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:44:37.586 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX 00:44:37.845 [2024-07-22 14:57:57.230854] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cG9ne331uX 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cG9ne331uX' 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99097 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:37.845 14:57:57 
nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99097 /var/tmp/bdevperf.sock 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99097 ']' 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:37.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:37.845 14:57:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:37.845 [2024-07-22 14:57:57.285234] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:37.845 [2024-07-22 14:57:57.285298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99097 ] 00:44:37.845 [2024-07-22 14:57:57.421837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.845 [2024-07-22 14:57:57.473843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:38.783 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:38.783 14:57:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:38.783 14:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX 00:44:38.783 [2024-07-22 14:57:58.332937] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:38.783 [2024-07-22 14:57:58.333037] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:44:38.783 TLSTESTn1 00:44:39.043 14:57:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:44:39.043 Running I/O for 10 seconds... 
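For reference, the pattern each bdevperf pass in this log follows, including the one currently running with the 0600-protected long key: start bdevperf with its own RPC socket, attach a TLS-enabled controller with the PSK, then drive I/O through bdevperf.py. A sketch assembled from the commands in this run, not the harness itself (the real script waits with waitforlisten rather than a fixed sleep):

    # One bdevperf TLS pass, as driven by run_bdevperf.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    sleep 1                                   # placeholder for waitforlisten
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests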
00:44:49.045 00:44:49.045 Latency(us) 00:44:49.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:49.045 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:44:49.045 Verification LBA range: start 0x0 length 0x2000 00:44:49.045 TLSTESTn1 : 10.01 6450.20 25.20 0.00 0.00 19810.49 5151.30 18888.10 00:44:49.045 =================================================================================================================== 00:44:49.045 Total : 6450.20 25.20 0.00 0.00 19810.49 5151.30 18888.10 00:44:49.045 0 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 99097 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99097 ']' 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99097 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99097 00:44:49.045 killing process with pid 99097 00:44:49.045 Received shutdown signal, test time was about 10.000000 seconds 00:44:49.045 00:44:49.045 Latency(us) 00:44:49.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:49.045 =================================================================================================================== 00:44:49.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99097' 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99097 00:44:49.045 [2024-07-22 14:58:08.570405] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:44:49.045 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99097 00:44:49.304 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.cG9ne331uX 00:44:49.304 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cG9ne331uX 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cG9ne331uX 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cG9ne331uX 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:44:49.305 
14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cG9ne331uX' 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=99245 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 99245 /var/tmp/bdevperf.sock 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99245 ']' 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:49.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:49.305 14:58:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:49.305 [2024-07-22 14:58:08.814018] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:49.305 [2024-07-22 14:58:08.814416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99245 ] 00:44:49.564 [2024-07-22 14:58:08.952300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:49.564 [2024-07-22 14:58:09.001187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:50.132 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:50.132 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:50.132 14:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX 00:44:50.392 [2024-07-22 14:58:09.812361] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:50.392 [2024-07-22 14:58:09.812447] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:44:50.392 [2024-07-22 14:58:09.812455] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.cG9ne331uX 00:44:50.392 2024/07/22 14:58:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.cG9ne331uX subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:44:50.392 request: 00:44:50.392 { 00:44:50.392 
"method": "bdev_nvme_attach_controller", 00:44:50.392 "params": { 00:44:50.392 "name": "TLSTEST", 00:44:50.392 "trtype": "tcp", 00:44:50.392 "traddr": "10.0.0.2", 00:44:50.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:50.392 "adrfam": "ipv4", 00:44:50.392 "trsvcid": "4420", 00:44:50.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:50.392 "psk": "/tmp/tmp.cG9ne331uX" 00:44:50.392 } 00:44:50.392 } 00:44:50.392 Got JSON-RPC error response 00:44:50.392 GoRPCClient: error on JSON-RPC call 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 99245 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99245 ']' 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99245 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99245 00:44:50.393 killing process with pid 99245 00:44:50.393 Received shutdown signal, test time was about 10.000000 seconds 00:44:50.393 00:44:50.393 Latency(us) 00:44:50.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:50.393 =================================================================================================================== 00:44:50.393 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99245' 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99245 00:44:50.393 14:58:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99245 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 99004 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99004 ']' 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99004 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99004 00:44:50.652 killing process with pid 99004 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99004' 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99004 00:44:50.652 [2024-07-22 14:58:10.070484] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99004 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:50.652 14:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99301 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99301 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99301 ']' 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:50.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:50.653 14:58:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:50.911 [2024-07-22 14:58:10.326904] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:50.911 [2024-07-22 14:58:10.326991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:50.911 [2024-07-22 14:58:10.465772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:50.911 [2024-07-22 14:58:10.516711] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:50.911 [2024-07-22 14:58:10.516776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:50.911 [2024-07-22 14:58:10.516783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:50.911 [2024-07-22 14:58:10.516787] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:50.911 [2024-07-22 14:58:10.516792] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
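The failed attach traced above is the intended negative case: target/tls.sh@170 loosened the key file to mode 0666, and bdev_nvme refuses to load a PSK file whose permissions are broader than owner-only, so both that attach and the nvmf_subsystem_add_host call below keep failing until the key is put back to 0600. A standalone sketch of that check, assuming an SPDK application is serving RPCs on /var/tmp/bdevperf.sock, the target from this trace is listening on 10.0.0.2:4420, and /tmp/psk.key is a hypothetical key file standing in for the mktemp-generated one:

  psk=/tmp/psk.key                      # hypothetical placeholder; the harness uses a mktemp-style path
  chmod 0666 "$psk"                     # world-readable key: the attach below is expected to be refused
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$psk" \
      || echo "attach rejected: incorrect permissions for PSK file"
  chmod 0600 "$psk"                     # owner-only mode, which the later passing run relies on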
00:44:50.911 [2024-07-22 14:58:10.516813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.cG9ne331uX 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.cG9ne331uX 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.cG9ne331uX 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cG9ne331uX 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:44:51.848 [2024-07-22 14:58:11.388028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:51.848 14:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:44:52.107 14:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:44:52.366 [2024-07-22 14:58:11.775332] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:52.366 [2024-07-22 14:58:11.775513] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:52.366 14:58:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:44:52.366 malloc0 00:44:52.626 14:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:44:52.626 14:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX 00:44:52.885 [2024-07-22 14:58:12.382706] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:44:52.886 [2024-07-22 14:58:12.382740] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:44:52.886 [2024-07-22 14:58:12.382765] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:44:52.886 2024/07/22 14:58:12 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.cG9ne331uX], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:44:52.886 request: 00:44:52.886 { 00:44:52.886 "method": "nvmf_subsystem_add_host", 00:44:52.886 "params": { 00:44:52.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:52.886 "host": "nqn.2016-06.io.spdk:host1", 00:44:52.886 "psk": "/tmp/tmp.cG9ne331uX" 00:44:52.886 } 00:44:52.886 } 00:44:52.886 Got JSON-RPC error response 00:44:52.886 GoRPCClient: error on JSON-RPC call 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 99301 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99301 ']' 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99301 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99301 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:44:52.886 killing process with pid 99301 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99301' 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99301 00:44:52.886 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99301 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.cG9ne331uX 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99406 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99406 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99406 ']' 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:53.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
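With the key restored to mode 0600 at target/tls.sh@181, the trace below repeats the setup_nvmf_tgt helper against the fresh target (pid 99406), and this time every step goes through with only the PSK-path deprecation warning. Condensed from the rpc.py calls traced at target/tls.sh@51-58, the helper boils down to roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key=/tmp/tmp.cG9ne331uX                                   # PSK file, now mode 0600
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) listener
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"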
00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:53.145 14:58:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:53.145 [2024-07-22 14:58:12.702222] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:53.145 [2024-07-22 14:58:12.702284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:53.411 [2024-07-22 14:58:12.842163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:53.411 [2024-07-22 14:58:12.891181] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:53.411 [2024-07-22 14:58:12.891227] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:53.411 [2024-07-22 14:58:12.891233] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:53.411 [2024-07-22 14:58:12.891237] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:53.411 [2024-07-22 14:58:12.891241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:53.411 [2024-07-22 14:58:12.891259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.cG9ne331uX 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cG9ne331uX 00:44:53.990 14:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:44:54.250 [2024-07-22 14:58:13.770440] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:54.250 14:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:44:54.509 14:58:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:44:54.769 [2024-07-22 14:58:14.165757] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:54.769 [2024-07-22 14:58:14.165941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:54.769 14:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:44:54.769 malloc0 00:44:54.769 14:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:44:55.028 14:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX 00:44:55.287 [2024-07-22 14:58:14.709582] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=99499 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 99499 /var/tmp/bdevperf.sock 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99499 ']' 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:55.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:55.287 14:58:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:55.287 [2024-07-22 14:58:14.763718] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:55.287 [2024-07-22 14:58:14.763798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99499 ] 00:44:55.287 [2024-07-22 14:58:14.900820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:55.547 [2024-07-22 14:58:14.952408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:56.116 14:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:56.116 14:58:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:56.116 14:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX 00:44:56.376 [2024-07-22 14:58:15.800295] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:56.376 [2024-07-22 14:58:15.800386] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:44:56.376 TLSTESTn1 00:44:56.376 14:58:15 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:44:56.636 14:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:44:56.636 "subsystems": [ 00:44:56.636 { 00:44:56.636 "subsystem": "keyring", 00:44:56.636 "config": [] 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "subsystem": "iobuf", 00:44:56.636 "config": [ 00:44:56.636 { 00:44:56.636 "method": "iobuf_set_options", 00:44:56.636 "params": { 00:44:56.636 "large_bufsize": 135168, 00:44:56.636 
"large_pool_count": 1024, 00:44:56.636 "small_bufsize": 8192, 00:44:56.636 "small_pool_count": 8192 00:44:56.636 } 00:44:56.636 } 00:44:56.636 ] 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "subsystem": "sock", 00:44:56.636 "config": [ 00:44:56.636 { 00:44:56.636 "method": "sock_set_default_impl", 00:44:56.636 "params": { 00:44:56.636 "impl_name": "posix" 00:44:56.636 } 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "method": "sock_impl_set_options", 00:44:56.636 "params": { 00:44:56.636 "enable_ktls": false, 00:44:56.636 "enable_placement_id": 0, 00:44:56.636 "enable_quickack": false, 00:44:56.636 "enable_recv_pipe": true, 00:44:56.636 "enable_zerocopy_send_client": false, 00:44:56.636 "enable_zerocopy_send_server": true, 00:44:56.636 "impl_name": "ssl", 00:44:56.636 "recv_buf_size": 4096, 00:44:56.636 "send_buf_size": 4096, 00:44:56.636 "tls_version": 0, 00:44:56.636 "zerocopy_threshold": 0 00:44:56.636 } 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "method": "sock_impl_set_options", 00:44:56.636 "params": { 00:44:56.636 "enable_ktls": false, 00:44:56.636 "enable_placement_id": 0, 00:44:56.636 "enable_quickack": false, 00:44:56.636 "enable_recv_pipe": true, 00:44:56.636 "enable_zerocopy_send_client": false, 00:44:56.636 "enable_zerocopy_send_server": true, 00:44:56.636 "impl_name": "posix", 00:44:56.636 "recv_buf_size": 2097152, 00:44:56.636 "send_buf_size": 2097152, 00:44:56.636 "tls_version": 0, 00:44:56.636 "zerocopy_threshold": 0 00:44:56.636 } 00:44:56.636 } 00:44:56.636 ] 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "subsystem": "vmd", 00:44:56.636 "config": [] 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "subsystem": "accel", 00:44:56.636 "config": [ 00:44:56.636 { 00:44:56.636 "method": "accel_set_options", 00:44:56.636 "params": { 00:44:56.636 "buf_count": 2048, 00:44:56.636 "large_cache_size": 16, 00:44:56.636 "sequence_count": 2048, 00:44:56.636 "small_cache_size": 128, 00:44:56.636 "task_count": 2048 00:44:56.636 } 00:44:56.636 } 00:44:56.636 ] 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "subsystem": "bdev", 00:44:56.636 "config": [ 00:44:56.636 { 00:44:56.636 "method": "bdev_set_options", 00:44:56.636 "params": { 00:44:56.636 "bdev_auto_examine": true, 00:44:56.636 "bdev_io_cache_size": 256, 00:44:56.636 "bdev_io_pool_size": 65535, 00:44:56.636 "iobuf_large_cache_size": 16, 00:44:56.636 "iobuf_small_cache_size": 128 00:44:56.636 } 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "method": "bdev_raid_set_options", 00:44:56.636 "params": { 00:44:56.636 "process_window_size_kb": 1024 00:44:56.636 } 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "method": "bdev_iscsi_set_options", 00:44:56.636 "params": { 00:44:56.636 "timeout_sec": 30 00:44:56.636 } 00:44:56.636 }, 00:44:56.636 { 00:44:56.636 "method": "bdev_nvme_set_options", 00:44:56.636 "params": { 00:44:56.637 "action_on_timeout": "none", 00:44:56.637 "allow_accel_sequence": false, 00:44:56.637 "arbitration_burst": 0, 00:44:56.637 "bdev_retry_count": 3, 00:44:56.637 "ctrlr_loss_timeout_sec": 0, 00:44:56.637 "delay_cmd_submit": true, 00:44:56.637 "dhchap_dhgroups": [ 00:44:56.637 "null", 00:44:56.637 "ffdhe2048", 00:44:56.637 "ffdhe3072", 00:44:56.637 "ffdhe4096", 00:44:56.637 "ffdhe6144", 00:44:56.637 "ffdhe8192" 00:44:56.637 ], 00:44:56.637 "dhchap_digests": [ 00:44:56.637 "sha256", 00:44:56.637 "sha384", 00:44:56.637 "sha512" 00:44:56.637 ], 00:44:56.637 "disable_auto_failback": false, 00:44:56.637 "fast_io_fail_timeout_sec": 0, 00:44:56.637 "generate_uuids": false, 00:44:56.637 "high_priority_weight": 0, 00:44:56.637 "io_path_stat": 
false, 00:44:56.637 "io_queue_requests": 0, 00:44:56.637 "keep_alive_timeout_ms": 10000, 00:44:56.637 "low_priority_weight": 0, 00:44:56.637 "medium_priority_weight": 0, 00:44:56.637 "nvme_adminq_poll_period_us": 10000, 00:44:56.637 "nvme_error_stat": false, 00:44:56.637 "nvme_ioq_poll_period_us": 0, 00:44:56.637 "rdma_cm_event_timeout_ms": 0, 00:44:56.637 "rdma_max_cq_size": 0, 00:44:56.637 "rdma_srq_size": 0, 00:44:56.637 "reconnect_delay_sec": 0, 00:44:56.637 "timeout_admin_us": 0, 00:44:56.637 "timeout_us": 0, 00:44:56.637 "transport_ack_timeout": 0, 00:44:56.637 "transport_retry_count": 4, 00:44:56.637 "transport_tos": 0 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "bdev_nvme_set_hotplug", 00:44:56.637 "params": { 00:44:56.637 "enable": false, 00:44:56.637 "period_us": 100000 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "bdev_malloc_create", 00:44:56.637 "params": { 00:44:56.637 "block_size": 4096, 00:44:56.637 "name": "malloc0", 00:44:56.637 "num_blocks": 8192, 00:44:56.637 "optimal_io_boundary": 0, 00:44:56.637 "physical_block_size": 4096, 00:44:56.637 "uuid": "c0974b0a-256d-4473-8b07-7fa323cedcdb" 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "bdev_wait_for_examine" 00:44:56.637 } 00:44:56.637 ] 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "subsystem": "nbd", 00:44:56.637 "config": [] 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "subsystem": "scheduler", 00:44:56.637 "config": [ 00:44:56.637 { 00:44:56.637 "method": "framework_set_scheduler", 00:44:56.637 "params": { 00:44:56.637 "name": "static" 00:44:56.637 } 00:44:56.637 } 00:44:56.637 ] 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "subsystem": "nvmf", 00:44:56.637 "config": [ 00:44:56.637 { 00:44:56.637 "method": "nvmf_set_config", 00:44:56.637 "params": { 00:44:56.637 "admin_cmd_passthru": { 00:44:56.637 "identify_ctrlr": false 00:44:56.637 }, 00:44:56.637 "discovery_filter": "match_any" 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "nvmf_set_max_subsystems", 00:44:56.637 "params": { 00:44:56.637 "max_subsystems": 1024 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "nvmf_set_crdt", 00:44:56.637 "params": { 00:44:56.637 "crdt1": 0, 00:44:56.637 "crdt2": 0, 00:44:56.637 "crdt3": 0 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "nvmf_create_transport", 00:44:56.637 "params": { 00:44:56.637 "abort_timeout_sec": 1, 00:44:56.637 "ack_timeout": 0, 00:44:56.637 "buf_cache_size": 4294967295, 00:44:56.637 "c2h_success": false, 00:44:56.637 "data_wr_pool_size": 0, 00:44:56.637 "dif_insert_or_strip": false, 00:44:56.637 "in_capsule_data_size": 4096, 00:44:56.637 "io_unit_size": 131072, 00:44:56.637 "max_aq_depth": 128, 00:44:56.637 "max_io_qpairs_per_ctrlr": 127, 00:44:56.637 "max_io_size": 131072, 00:44:56.637 "max_queue_depth": 128, 00:44:56.637 "num_shared_buffers": 511, 00:44:56.637 "sock_priority": 0, 00:44:56.637 "trtype": "TCP", 00:44:56.637 "zcopy": false 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "nvmf_create_subsystem", 00:44:56.637 "params": { 00:44:56.637 "allow_any_host": false, 00:44:56.637 "ana_reporting": false, 00:44:56.637 "max_cntlid": 65519, 00:44:56.637 "max_namespaces": 10, 00:44:56.637 "min_cntlid": 1, 00:44:56.637 "model_number": "SPDK bdev Controller", 00:44:56.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.637 "serial_number": "SPDK00000000000001" 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "nvmf_subsystem_add_host", 
00:44:56.637 "params": { 00:44:56.637 "host": "nqn.2016-06.io.spdk:host1", 00:44:56.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.637 "psk": "/tmp/tmp.cG9ne331uX" 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "nvmf_subsystem_add_ns", 00:44:56.637 "params": { 00:44:56.637 "namespace": { 00:44:56.637 "bdev_name": "malloc0", 00:44:56.637 "nguid": "C0974B0A256D44738B077FA323CEDCDB", 00:44:56.637 "no_auto_visible": false, 00:44:56.637 "nsid": 1, 00:44:56.637 "uuid": "c0974b0a-256d-4473-8b07-7fa323cedcdb" 00:44:56.637 }, 00:44:56.637 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:44:56.637 } 00:44:56.637 }, 00:44:56.637 { 00:44:56.637 "method": "nvmf_subsystem_add_listener", 00:44:56.637 "params": { 00:44:56.637 "listen_address": { 00:44:56.637 "adrfam": "IPv4", 00:44:56.637 "traddr": "10.0.0.2", 00:44:56.637 "trsvcid": "4420", 00:44:56.637 "trtype": "TCP" 00:44:56.637 }, 00:44:56.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.637 "secure_channel": true 00:44:56.637 } 00:44:56.637 } 00:44:56.637 ] 00:44:56.637 } 00:44:56.637 ] 00:44:56.637 }' 00:44:56.637 14:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:44:56.897 14:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:44:56.897 "subsystems": [ 00:44:56.897 { 00:44:56.897 "subsystem": "keyring", 00:44:56.897 "config": [] 00:44:56.897 }, 00:44:56.897 { 00:44:56.897 "subsystem": "iobuf", 00:44:56.897 "config": [ 00:44:56.897 { 00:44:56.897 "method": "iobuf_set_options", 00:44:56.897 "params": { 00:44:56.897 "large_bufsize": 135168, 00:44:56.897 "large_pool_count": 1024, 00:44:56.897 "small_bufsize": 8192, 00:44:56.897 "small_pool_count": 8192 00:44:56.897 } 00:44:56.897 } 00:44:56.897 ] 00:44:56.897 }, 00:44:56.897 { 00:44:56.897 "subsystem": "sock", 00:44:56.897 "config": [ 00:44:56.897 { 00:44:56.897 "method": "sock_set_default_impl", 00:44:56.897 "params": { 00:44:56.897 "impl_name": "posix" 00:44:56.897 } 00:44:56.897 }, 00:44:56.897 { 00:44:56.897 "method": "sock_impl_set_options", 00:44:56.897 "params": { 00:44:56.897 "enable_ktls": false, 00:44:56.898 "enable_placement_id": 0, 00:44:56.898 "enable_quickack": false, 00:44:56.898 "enable_recv_pipe": true, 00:44:56.898 "enable_zerocopy_send_client": false, 00:44:56.898 "enable_zerocopy_send_server": true, 00:44:56.898 "impl_name": "ssl", 00:44:56.898 "recv_buf_size": 4096, 00:44:56.898 "send_buf_size": 4096, 00:44:56.898 "tls_version": 0, 00:44:56.898 "zerocopy_threshold": 0 00:44:56.898 } 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "method": "sock_impl_set_options", 00:44:56.898 "params": { 00:44:56.898 "enable_ktls": false, 00:44:56.898 "enable_placement_id": 0, 00:44:56.898 "enable_quickack": false, 00:44:56.898 "enable_recv_pipe": true, 00:44:56.898 "enable_zerocopy_send_client": false, 00:44:56.898 "enable_zerocopy_send_server": true, 00:44:56.898 "impl_name": "posix", 00:44:56.898 "recv_buf_size": 2097152, 00:44:56.898 "send_buf_size": 2097152, 00:44:56.898 "tls_version": 0, 00:44:56.898 "zerocopy_threshold": 0 00:44:56.898 } 00:44:56.898 } 00:44:56.898 ] 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "subsystem": "vmd", 00:44:56.898 "config": [] 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "subsystem": "accel", 00:44:56.898 "config": [ 00:44:56.898 { 00:44:56.898 "method": "accel_set_options", 00:44:56.898 "params": { 00:44:56.898 "buf_count": 2048, 00:44:56.898 "large_cache_size": 16, 00:44:56.898 "sequence_count": 2048, 00:44:56.898 "small_cache_size": 128, 
00:44:56.898 "task_count": 2048 00:44:56.898 } 00:44:56.898 } 00:44:56.898 ] 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "subsystem": "bdev", 00:44:56.898 "config": [ 00:44:56.898 { 00:44:56.898 "method": "bdev_set_options", 00:44:56.898 "params": { 00:44:56.898 "bdev_auto_examine": true, 00:44:56.898 "bdev_io_cache_size": 256, 00:44:56.898 "bdev_io_pool_size": 65535, 00:44:56.898 "iobuf_large_cache_size": 16, 00:44:56.898 "iobuf_small_cache_size": 128 00:44:56.898 } 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "method": "bdev_raid_set_options", 00:44:56.898 "params": { 00:44:56.898 "process_window_size_kb": 1024 00:44:56.898 } 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "method": "bdev_iscsi_set_options", 00:44:56.898 "params": { 00:44:56.898 "timeout_sec": 30 00:44:56.898 } 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "method": "bdev_nvme_set_options", 00:44:56.898 "params": { 00:44:56.898 "action_on_timeout": "none", 00:44:56.898 "allow_accel_sequence": false, 00:44:56.898 "arbitration_burst": 0, 00:44:56.898 "bdev_retry_count": 3, 00:44:56.898 "ctrlr_loss_timeout_sec": 0, 00:44:56.898 "delay_cmd_submit": true, 00:44:56.898 "dhchap_dhgroups": [ 00:44:56.898 "null", 00:44:56.898 "ffdhe2048", 00:44:56.898 "ffdhe3072", 00:44:56.898 "ffdhe4096", 00:44:56.898 "ffdhe6144", 00:44:56.898 "ffdhe8192" 00:44:56.898 ], 00:44:56.898 "dhchap_digests": [ 00:44:56.898 "sha256", 00:44:56.898 "sha384", 00:44:56.898 "sha512" 00:44:56.898 ], 00:44:56.898 "disable_auto_failback": false, 00:44:56.898 "fast_io_fail_timeout_sec": 0, 00:44:56.898 "generate_uuids": false, 00:44:56.898 "high_priority_weight": 0, 00:44:56.898 "io_path_stat": false, 00:44:56.898 "io_queue_requests": 512, 00:44:56.898 "keep_alive_timeout_ms": 10000, 00:44:56.898 "low_priority_weight": 0, 00:44:56.898 "medium_priority_weight": 0, 00:44:56.898 "nvme_adminq_poll_period_us": 10000, 00:44:56.898 "nvme_error_stat": false, 00:44:56.898 "nvme_ioq_poll_period_us": 0, 00:44:56.898 "rdma_cm_event_timeout_ms": 0, 00:44:56.898 "rdma_max_cq_size": 0, 00:44:56.898 "rdma_srq_size": 0, 00:44:56.898 "reconnect_delay_sec": 0, 00:44:56.898 "timeout_admin_us": 0, 00:44:56.898 "timeout_us": 0, 00:44:56.898 "transport_ack_timeout": 0, 00:44:56.898 "transport_retry_count": 4, 00:44:56.898 "transport_tos": 0 00:44:56.898 } 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "method": "bdev_nvme_attach_controller", 00:44:56.898 "params": { 00:44:56.898 "adrfam": "IPv4", 00:44:56.898 "ctrlr_loss_timeout_sec": 0, 00:44:56.898 "ddgst": false, 00:44:56.898 "fast_io_fail_timeout_sec": 0, 00:44:56.898 "hdgst": false, 00:44:56.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:56.898 "name": "TLSTEST", 00:44:56.898 "prchk_guard": false, 00:44:56.898 "prchk_reftag": false, 00:44:56.898 "psk": "/tmp/tmp.cG9ne331uX", 00:44:56.898 "reconnect_delay_sec": 0, 00:44:56.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:56.898 "traddr": "10.0.0.2", 00:44:56.898 "trsvcid": "4420", 00:44:56.898 "trtype": "TCP" 00:44:56.898 } 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "method": "bdev_nvme_set_hotplug", 00:44:56.898 "params": { 00:44:56.898 "enable": false, 00:44:56.898 "period_us": 100000 00:44:56.898 } 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "method": "bdev_wait_for_examine" 00:44:56.898 } 00:44:56.898 ] 00:44:56.898 }, 00:44:56.898 { 00:44:56.898 "subsystem": "nbd", 00:44:56.898 "config": [] 00:44:56.898 } 00:44:56.898 ] 00:44:56.898 }' 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 99499 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@946 -- # '[' -z 99499 ']' 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99499 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99499 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:44:56.898 killing process with pid 99499 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99499' 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99499 00:44:56.898 Received shutdown signal, test time was about 10.000000 seconds 00:44:56.898 00:44:56.898 Latency(us) 00:44:56.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:56.898 =================================================================================================================== 00:44:56.898 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:56.898 [2024-07-22 14:58:16.474917] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:44:56.898 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99499 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 99406 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99406 ']' 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99406 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99406 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:44:57.158 killing process with pid 99406 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99406' 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99406 00:44:57.158 [2024-07-22 14:58:16.683946] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:44:57.158 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99406 00:44:57.418 14:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:44:57.418 14:58:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:57.418 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:44:57.418 14:58:16 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:44:57.418 "subsystems": [ 00:44:57.418 { 00:44:57.418 "subsystem": "keyring", 00:44:57.418 "config": [] 00:44:57.418 }, 00:44:57.418 { 00:44:57.418 "subsystem": "iobuf", 00:44:57.418 "config": [ 00:44:57.418 { 00:44:57.418 "method": "iobuf_set_options", 00:44:57.418 "params": { 00:44:57.418 "large_bufsize": 135168, 00:44:57.418 
"large_pool_count": 1024, 00:44:57.418 "small_bufsize": 8192, 00:44:57.418 "small_pool_count": 8192 00:44:57.418 } 00:44:57.418 } 00:44:57.418 ] 00:44:57.418 }, 00:44:57.418 { 00:44:57.418 "subsystem": "sock", 00:44:57.418 "config": [ 00:44:57.418 { 00:44:57.418 "method": "sock_set_default_impl", 00:44:57.418 "params": { 00:44:57.418 "impl_name": "posix" 00:44:57.418 } 00:44:57.418 }, 00:44:57.418 { 00:44:57.418 "method": "sock_impl_set_options", 00:44:57.418 "params": { 00:44:57.418 "enable_ktls": false, 00:44:57.418 "enable_placement_id": 0, 00:44:57.418 "enable_quickack": false, 00:44:57.418 "enable_recv_pipe": true, 00:44:57.418 "enable_zerocopy_send_client": false, 00:44:57.418 "enable_zerocopy_send_server": true, 00:44:57.418 "impl_name": "ssl", 00:44:57.418 "recv_buf_size": 4096, 00:44:57.418 "send_buf_size": 4096, 00:44:57.418 "tls_version": 0, 00:44:57.418 "zerocopy_threshold": 0 00:44:57.418 } 00:44:57.418 }, 00:44:57.418 { 00:44:57.418 "method": "sock_impl_set_options", 00:44:57.418 "params": { 00:44:57.418 "enable_ktls": false, 00:44:57.418 "enable_placement_id": 0, 00:44:57.418 "enable_quickack": false, 00:44:57.418 "enable_recv_pipe": true, 00:44:57.418 "enable_zerocopy_send_client": false, 00:44:57.418 "enable_zerocopy_send_server": true, 00:44:57.418 "impl_name": "posix", 00:44:57.418 "recv_buf_size": 2097152, 00:44:57.418 "send_buf_size": 2097152, 00:44:57.418 "tls_version": 0, 00:44:57.418 "zerocopy_threshold": 0 00:44:57.418 } 00:44:57.418 } 00:44:57.418 ] 00:44:57.418 }, 00:44:57.418 { 00:44:57.418 "subsystem": "vmd", 00:44:57.418 "config": [] 00:44:57.418 }, 00:44:57.418 { 00:44:57.418 "subsystem": "accel", 00:44:57.418 "config": [ 00:44:57.418 { 00:44:57.418 "method": "accel_set_options", 00:44:57.418 "params": { 00:44:57.418 "buf_count": 2048, 00:44:57.418 "large_cache_size": 16, 00:44:57.418 "sequence_count": 2048, 00:44:57.418 "small_cache_size": 128, 00:44:57.418 "task_count": 2048 00:44:57.418 } 00:44:57.418 } 00:44:57.418 ] 00:44:57.418 }, 00:44:57.418 { 00:44:57.418 "subsystem": "bdev", 00:44:57.418 "config": [ 00:44:57.418 { 00:44:57.418 "method": "bdev_set_options", 00:44:57.418 "params": { 00:44:57.419 "bdev_auto_examine": true, 00:44:57.419 "bdev_io_cache_size": 256, 00:44:57.419 "bdev_io_pool_size": 65535, 00:44:57.419 "iobuf_large_cache_size": 16, 00:44:57.419 "iobuf_small_cache_size": 128 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "bdev_raid_set_options", 00:44:57.419 "params": { 00:44:57.419 "process_window_size_kb": 1024 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "bdev_iscsi_set_options", 00:44:57.419 "params": { 00:44:57.419 "timeout_sec": 30 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "bdev_nvme_set_options", 00:44:57.419 "params": { 00:44:57.419 "action_on_timeout": "none", 00:44:57.419 "allow_accel_sequence": false, 00:44:57.419 "arbitration_burst": 0, 00:44:57.419 "bdev_retry_count": 3, 00:44:57.419 "ctrlr_loss_timeout_sec": 0, 00:44:57.419 "delay_cmd_submit": true, 00:44:57.419 "dhchap_dhgroups": [ 00:44:57.419 "null", 00:44:57.419 "ffdhe2048", 00:44:57.419 "ffdhe3072", 00:44:57.419 "ffdhe4096", 00:44:57.419 "ffdhe6144", 00:44:57.419 "ffdhe8192" 00:44:57.419 ], 00:44:57.419 "dhchap_digests": [ 00:44:57.419 "sha256", 00:44:57.419 "sha384", 00:44:57.419 "sha512" 00:44:57.419 ], 00:44:57.419 "disable_auto_failback": false, 00:44:57.419 "fast_io_fail_timeout_sec": 0, 00:44:57.419 "generate_uuids": false, 00:44:57.419 "high_priority_weight": 0, 00:44:57.419 "io_path_stat": 
false, 00:44:57.419 "io_queue_requests": 0, 00:44:57.419 "keep_alive_timeout_ms": 10000, 00:44:57.419 "low_priority_weight": 0, 00:44:57.419 "medium_priority_weight": 0, 00:44:57.419 "nvme_adminq_poll_period_us": 10000, 00:44:57.419 "nvme_error_stat": false, 00:44:57.419 "nvme_ioq_poll_period_us": 0, 00:44:57.419 "rdma_cm_event_timeout_ms": 0, 00:44:57.419 "rdma_max_cq_size": 0, 00:44:57.419 "rdma_srq_size": 0, 00:44:57.419 "reconnect_delay_sec": 0, 00:44:57.419 "timeout_admin_us": 0, 00:44:57.419 "timeout_us": 0, 00:44:57.419 "transport_ack_timeout": 0, 00:44:57.419 "transport_retry_count": 4, 00:44:57.419 "transport_tos": 0 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "bdev_nvme_set_hotplug", 00:44:57.419 "params": { 00:44:57.419 "enable": false, 00:44:57.419 "period_us": 100000 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "bdev_malloc_create", 00:44:57.419 "params": { 00:44:57.419 "block_size": 4096, 00:44:57.419 "name": "malloc0", 00:44:57.419 "num_blocks": 8192, 00:44:57.419 "optimal_io_boundary": 0, 00:44:57.419 "physical_block_size": 4096, 00:44:57.419 "uuid": "c0974b0a-256d-4473-8b07-7fa323cedcdb" 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "bdev_wait_for_examine" 00:44:57.419 } 00:44:57.419 ] 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "subsystem": "nbd", 00:44:57.419 "config": [] 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "subsystem": "scheduler", 00:44:57.419 "config": [ 00:44:57.419 { 00:44:57.419 "method": "framework_set_scheduler", 00:44:57.419 "params": { 00:44:57.419 "name": "static" 00:44:57.419 } 00:44:57.419 } 00:44:57.419 ] 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "subsystem": "nvmf", 00:44:57.419 "config": [ 00:44:57.419 { 00:44:57.419 "method": "nvmf_set_config", 00:44:57.419 "params": { 00:44:57.419 "admin_cmd_passthru": { 00:44:57.419 "identify_ctrlr": false 00:44:57.419 }, 00:44:57.419 "discovery_filter": "match_any" 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "nvmf_set_max_subsystems", 00:44:57.419 "params": { 00:44:57.419 "max_subsystems": 1024 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "nvmf_set_crdt", 00:44:57.419 "params": { 00:44:57.419 "crdt1": 0, 00:44:57.419 "crdt2": 0, 00:44:57.419 "crdt3": 0 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "nvmf_create_transport", 00:44:57.419 "params": { 00:44:57.419 "abort_timeout_sec": 1, 00:44:57.419 "ack_timeout": 0, 00:44:57.419 "buf_cache_size": 4294967295, 00:44:57.419 "c2h_success": false, 00:44:57.419 "data_wr_pool_size": 0, 00:44:57.419 "dif_insert_or_strip": false, 00:44:57.419 "in_capsule_data_size": 4096, 00:44:57.419 "io_unit_size": 131072, 00:44:57.419 "max_aq_depth": 128, 00:44:57.419 "max_io_qpairs_per_ctrlr": 127, 00:44:57.419 "max_io_size": 131072, 00:44:57.419 "max_queue_depth": 128, 00:44:57.419 "num_shared_buffers": 511, 00:44:57.419 "sock_priority": 0, 00:44:57.419 "trtype": "TCP", 00:44:57.419 "zcopy": false 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "nvmf_create_subsystem", 00:44:57.419 "params": { 00:44:57.419 "allow_any_host": false, 00:44:57.419 "ana_reporting": false, 00:44:57.419 "max_cntlid": 65519, 00:44:57.419 "max_namespaces": 10, 00:44:57.419 "min_cntlid": 1, 00:44:57.419 "model_number": "SPDK bdev Controller", 00:44:57.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:57.419 "serial_number": "SPDK00000000000001" 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "nvmf_subsystem_add_host", 
00:44:57.419 "params": { 00:44:57.419 "host": "nqn.2016-06.io.spdk:host1", 00:44:57.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:57.419 "psk": "/tmp/tmp.cG9ne331uX" 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "nvmf_subsystem_add_ns", 00:44:57.419 "params": { 00:44:57.419 "namespace": { 00:44:57.419 "bdev_name": "malloc0", 00:44:57.419 "nguid": "C0974B0A256D44738B077FA323CEDCDB", 00:44:57.419 "no_auto_visible": false, 00:44:57.419 "nsid": 1, 00:44:57.419 "uuid": "c0974b0a-256d-4473-8b07-7fa323cedcdb" 00:44:57.419 }, 00:44:57.419 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:44:57.419 } 00:44:57.419 }, 00:44:57.419 { 00:44:57.419 "method": "nvmf_subsystem_add_listener", 00:44:57.419 "params": { 00:44:57.419 "listen_address": { 00:44:57.419 "adrfam": "IPv4", 00:44:57.419 "traddr": "10.0.0.2", 00:44:57.419 "trsvcid": "4420", 00:44:57.419 "trtype": "TCP" 00:44:57.419 }, 00:44:57.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:57.419 "secure_channel": true 00:44:57.419 } 00:44:57.419 } 00:44:57.419 ] 00:44:57.419 } 00:44:57.419 ] 00:44:57.419 }' 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99572 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99572 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99572 ']' 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:57.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:57.419 14:58:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:57.419 [2024-07-22 14:58:16.938996] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:44:57.419 [2024-07-22 14:58:16.939069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:57.679 [2024-07-22 14:58:17.068782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:57.679 [2024-07-22 14:58:17.120341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:57.679 [2024-07-22 14:58:17.120386] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:57.679 [2024-07-22 14:58:17.120392] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:57.679 [2024-07-22 14:58:17.120397] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:57.679 [2024-07-22 14:58:17.120401] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
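The JSON above is the target configuration captured earlier with rpc.py save_config (target/tls.sh@196) being fed back into a fresh nvmf_tgt as /dev/fd/62. Without the netns and nvmfappstart wrappers used by the harness, the pattern reduces to roughly the following sketch, run from the SPDK repo root:

  tgtconf=$(scripts/rpc.py save_config)                            # dump the running target's configuration as JSON
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")   # <(...) is what shows up as /dev/fd/62 in the trace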
00:44:57.679 [2024-07-22 14:58:17.120468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:44:57.939 [2024-07-22 14:58:17.318304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:57.939 [2024-07-22 14:58:17.334234] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:44:57.939 [2024-07-22 14:58:17.350184] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:57.939 [2024-07-22 14:58:17.350354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:58.202 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:58.202 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:58.202 14:58:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:58.202 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:58.202 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:58.202 14:58:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:58.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=99616 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 99616 /var/tmp/bdevperf.sock 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99616 ']' 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
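The initiator comes back the same way in the trace that follows: the configuration captured from the first bdevperf over /var/tmp/bdevperf.sock (target/tls.sh@197 save_config) is echoed back in as /dev/fd/63, and since bdevperf runs with -z it then sits idle until the wrapper script triggers the workload at target/tls.sh@211. A stripped-down sketch of those steps, run from the SPDK repo root (the harness additionally waits for each RPC socket to appear before issuing calls):

  bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)   # saved from the first bdevperf instance
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # triggers the 10s verify run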
00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:44:58.476 14:58:17 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:44:58.476 "subsystems": [ 00:44:58.476 { 00:44:58.476 "subsystem": "keyring", 00:44:58.476 "config": [] 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "subsystem": "iobuf", 00:44:58.476 "config": [ 00:44:58.476 { 00:44:58.476 "method": "iobuf_set_options", 00:44:58.476 "params": { 00:44:58.476 "large_bufsize": 135168, 00:44:58.476 "large_pool_count": 1024, 00:44:58.476 "small_bufsize": 8192, 00:44:58.476 "small_pool_count": 8192 00:44:58.476 } 00:44:58.476 } 00:44:58.476 ] 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "subsystem": "sock", 00:44:58.476 "config": [ 00:44:58.476 { 00:44:58.476 "method": "sock_set_default_impl", 00:44:58.476 "params": { 00:44:58.476 "impl_name": "posix" 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "sock_impl_set_options", 00:44:58.476 "params": { 00:44:58.476 "enable_ktls": false, 00:44:58.476 "enable_placement_id": 0, 00:44:58.476 "enable_quickack": false, 00:44:58.476 "enable_recv_pipe": true, 00:44:58.476 "enable_zerocopy_send_client": false, 00:44:58.476 "enable_zerocopy_send_server": true, 00:44:58.476 "impl_name": "ssl", 00:44:58.476 "recv_buf_size": 4096, 00:44:58.476 "send_buf_size": 4096, 00:44:58.476 "tls_version": 0, 00:44:58.476 "zerocopy_threshold": 0 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "sock_impl_set_options", 00:44:58.476 "params": { 00:44:58.476 "enable_ktls": false, 00:44:58.476 "enable_placement_id": 0, 00:44:58.476 "enable_quickack": false, 00:44:58.476 "enable_recv_pipe": true, 00:44:58.476 "enable_zerocopy_send_client": false, 00:44:58.476 "enable_zerocopy_send_server": true, 00:44:58.476 "impl_name": "posix", 00:44:58.476 "recv_buf_size": 2097152, 00:44:58.476 "send_buf_size": 2097152, 00:44:58.476 "tls_version": 0, 00:44:58.476 "zerocopy_threshold": 0 00:44:58.476 } 00:44:58.476 } 00:44:58.476 ] 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "subsystem": "vmd", 00:44:58.476 "config": [] 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "subsystem": "accel", 00:44:58.476 "config": [ 00:44:58.476 { 00:44:58.476 "method": "accel_set_options", 00:44:58.476 "params": { 00:44:58.476 "buf_count": 2048, 00:44:58.476 "large_cache_size": 16, 00:44:58.476 "sequence_count": 2048, 00:44:58.476 "small_cache_size": 128, 00:44:58.476 "task_count": 2048 00:44:58.476 } 00:44:58.476 } 00:44:58.476 ] 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "subsystem": "bdev", 00:44:58.476 "config": [ 00:44:58.476 { 00:44:58.476 "method": "bdev_set_options", 00:44:58.476 "params": { 00:44:58.476 "bdev_auto_examine": true, 00:44:58.476 "bdev_io_cache_size": 256, 00:44:58.476 "bdev_io_pool_size": 65535, 00:44:58.476 "iobuf_large_cache_size": 16, 00:44:58.476 "iobuf_small_cache_size": 128 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "bdev_raid_set_options", 00:44:58.476 "params": { 00:44:58.476 "process_window_size_kb": 1024 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "bdev_iscsi_set_options", 00:44:58.476 "params": { 00:44:58.476 "timeout_sec": 30 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "bdev_nvme_set_options", 00:44:58.476 "params": { 00:44:58.476 "action_on_timeout": "none", 00:44:58.476 
"allow_accel_sequence": false, 00:44:58.476 "arbitration_burst": 0, 00:44:58.476 "bdev_retry_count": 3, 00:44:58.476 "ctrlr_loss_timeout_sec": 0, 00:44:58.476 "delay_cmd_submit": true, 00:44:58.476 "dhchap_dhgroups": [ 00:44:58.476 "null", 00:44:58.476 "ffdhe2048", 00:44:58.476 "ffdhe3072", 00:44:58.476 "ffdhe4096", 00:44:58.476 "ffdhe6144", 00:44:58.476 "ffdhe8192" 00:44:58.476 ], 00:44:58.476 "dhchap_digests": [ 00:44:58.476 "sha256", 00:44:58.476 "sha384", 00:44:58.476 "sha512" 00:44:58.476 ], 00:44:58.476 "disable_auto_failback": false, 00:44:58.476 "fast_io_fail_timeout_sec": 0, 00:44:58.476 "generate_uuids": false, 00:44:58.476 "high_priority_weight": 0, 00:44:58.476 "io_path_stat": false, 00:44:58.476 "io_queue_requests": 512, 00:44:58.476 "keep_alive_timeout_ms": 10000, 00:44:58.476 "low_priority_weight": 0, 00:44:58.476 "medium_priority_weight": 0, 00:44:58.476 "nvme_adminq_poll_period_us": 10000, 00:44:58.476 "nvme_error_stat": false, 00:44:58.476 "nvme_ioq_poll_period_us": 0, 00:44:58.476 "rdma_cm_event_timeout_ms": 0, 00:44:58.476 "rdma_max_cq_size": 0, 00:44:58.476 "rdma_srq_size": 0, 00:44:58.476 "reconnect_delay_sec": 0, 00:44:58.476 "timeout_admin_us": 0, 00:44:58.476 "timeout_us": 0, 00:44:58.476 "transport_ack_timeout": 0, 00:44:58.476 "transport 14:58:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:44:58.476 _retry_count": 4, 00:44:58.476 "transport_tos": 0 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "bdev_nvme_attach_controller", 00:44:58.476 "params": { 00:44:58.476 "adrfam": "IPv4", 00:44:58.476 "ctrlr_loss_timeout_sec": 0, 00:44:58.476 "ddgst": false, 00:44:58.476 "fast_io_fail_timeout_sec": 0, 00:44:58.476 "hdgst": false, 00:44:58.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:58.476 "name": "TLSTEST", 00:44:58.476 "prchk_guard": false, 00:44:58.476 "prchk_reftag": false, 00:44:58.476 "psk": "/tmp/tmp.cG9ne331uX", 00:44:58.476 "reconnect_delay_sec": 0, 00:44:58.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:58.476 "traddr": "10.0.0.2", 00:44:58.476 "trsvcid": "4420", 00:44:58.476 "trtype": "TCP" 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "bdev_nvme_set_hotplug", 00:44:58.476 "params": { 00:44:58.476 "enable": false, 00:44:58.476 "period_us": 100000 00:44:58.476 } 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "method": "bdev_wait_for_examine" 00:44:58.476 } 00:44:58.476 ] 00:44:58.476 }, 00:44:58.476 { 00:44:58.476 "subsystem": "nbd", 00:44:58.476 "config": [] 00:44:58.476 } 00:44:58.477 ] 00:44:58.477 }' 00:44:58.477 [2024-07-22 14:58:17.880473] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:44:58.477 [2024-07-22 14:58:17.880628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99616 ] 00:44:58.477 [2024-07-22 14:58:18.020245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:58.477 [2024-07-22 14:58:18.070315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:44:58.736 [2024-07-22 14:58:18.207901] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:58.736 [2024-07-22 14:58:18.207992] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:44:59.304 14:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:44:59.304 14:58:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:44:59.304 14:58:18 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:44:59.304 Running I/O for 10 seconds... 00:45:09.322 00:45:09.322 Latency(us) 00:45:09.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:09.322 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:45:09.322 Verification LBA range: start 0x0 length 0x2000 00:45:09.322 TLSTESTn1 : 10.01 6462.32 25.24 0.00 0.00 19774.99 4321.37 15224.96 00:45:09.322 =================================================================================================================== 00:45:09.322 Total : 6462.32 25.24 0.00 0.00 19774.99 4321.37 15224.96 00:45:09.322 0 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 99616 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99616 ']' 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99616 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99616 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:45:09.322 killing process with pid 99616 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99616' 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99616 00:45:09.322 Received shutdown signal, test time was about 10.000000 seconds 00:45:09.322 00:45:09.322 Latency(us) 00:45:09.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:09.322 =================================================================================================================== 00:45:09.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:09.322 [2024-07-22 14:58:28.869070] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:45:09.322 14:58:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 99616 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 99572 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99572 ']' 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99572 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99572 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:45:09.580 killing process with pid 99572 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99572' 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99572 00:45:09.580 [2024-07-22 14:58:29.090506] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:45:09.580 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99572 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99761 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99761 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99761 ']' 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:09.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:09.872 14:58:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:09.872 [2024-07-22 14:58:29.344923] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:09.872 [2024-07-22 14:58:29.344997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:09.872 [2024-07-22 14:58:29.470381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:10.137 [2024-07-22 14:58:29.517768] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:10.137 [2024-07-22 14:58:29.517819] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:45:10.137 [2024-07-22 14:58:29.517824] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:10.137 [2024-07-22 14:58:29.517829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:10.137 [2024-07-22 14:58:29.517833] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:10.137 [2024-07-22 14:58:29.517856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.cG9ne331uX 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.cG9ne331uX 00:45:10.704 14:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:45:10.962 [2024-07-22 14:58:30.417967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:10.962 14:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:45:11.221 14:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:45:11.221 [2024-07-22 14:58:30.769314] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:11.221 [2024-07-22 14:58:30.769480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:11.221 14:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:45:11.479 malloc0 00:45:11.479 14:58:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cG9ne331uX 00:45:11.738 [2024-07-22 14:58:31.340964] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=99858 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 99858 /var/tmp/bdevperf.sock 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99858 ']' 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:11.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:11.738 14:58:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:11.997 [2024-07-22 14:58:31.414298] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:11.997 [2024-07-22 14:58:31.414703] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99858 ] 00:45:11.997 [2024-07-22 14:58:31.553743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:11.997 [2024-07-22 14:58:31.599575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:12.932 14:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:12.932 14:58:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:45:12.932 14:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cG9ne331uX 00:45:12.932 14:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:45:13.190 [2024-07-22 14:58:32.578454] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:13.190 nvme0n1 00:45:13.190 14:58:32 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:13.190 Running I/O for 1 seconds... 
00:45:14.567 00:45:14.567 Latency(us) 00:45:14.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:14.567 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:14.567 Verification LBA range: start 0x0 length 0x2000 00:45:14.567 nvme0n1 : 1.01 6562.89 25.64 0.00 0.00 19366.79 4178.28 18773.63 00:45:14.567 =================================================================================================================== 00:45:14.567 Total : 6562.89 25.64 0.00 0.00 19366.79 4178.28 18773.63 00:45:14.567 0 00:45:14.567 14:58:33 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 99858 00:45:14.567 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99858 ']' 00:45:14.567 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99858 00:45:14.567 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:14.567 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:14.568 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99858 00:45:14.568 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:45:14.568 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:45:14.568 killing process with pid 99858 00:45:14.568 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99858' 00:45:14.568 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99858 00:45:14.568 Received shutdown signal, test time was about 1.000000 seconds 00:45:14.568 00:45:14.568 Latency(us) 00:45:14.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:14.568 =================================================================================================================== 00:45:14.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:14.568 14:58:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99858 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 99761 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99761 ']' 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99761 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99761 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:45:14.568 killing process with pid 99761 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99761' 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99761 00:45:14.568 [2024-07-22 14:58:34.061280] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:45:14.568 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99761 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99928 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99928 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99928 ']' 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:14.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:14.826 14:58:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:14.826 [2024-07-22 14:58:34.303306] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:14.826 [2024-07-22 14:58:34.303363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:14.826 [2024-07-22 14:58:34.427257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.085 [2024-07-22 14:58:34.476641] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:15.085 [2024-07-22 14:58:34.476701] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:15.085 [2024-07-22 14:58:34.476707] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:15.085 [2024-07-22 14:58:34.476712] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:15.085 [2024-07-22 14:58:34.476716] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:45:15.085 [2024-07-22 14:58:34.476735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:15.653 [2024-07-22 14:58:35.224775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:15.653 malloc0 00:45:15.653 [2024-07-22 14:58:35.253122] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:15.653 [2024-07-22 14:58:35.253295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:15.653 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=99978 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 99978 /var/tmp/bdevperf.sock 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99978 ']' 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:15.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:15.912 14:58:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:15.912 [2024-07-22 14:58:35.324297] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:45:15.912 [2024-07-22 14:58:35.324364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99978 ] 00:45:15.912 [2024-07-22 14:58:35.460482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.912 [2024-07-22 14:58:35.513010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:16.847 14:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:16.847 14:58:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:45:16.847 14:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.cG9ne331uX 00:45:16.847 14:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:45:17.105 [2024-07-22 14:58:36.536955] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:17.105 nvme0n1 00:45:17.105 14:58:36 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:17.105 Running I/O for 1 seconds... 00:45:18.479 00:45:18.479 Latency(us) 00:45:18.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:18.479 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:18.479 Verification LBA range: start 0x0 length 0x2000 00:45:18.479 nvme0n1 : 1.01 6492.83 25.36 0.00 0.00 19575.02 4178.28 16598.64 00:45:18.479 =================================================================================================================== 00:45:18.479 Total : 6492.83 25.36 0.00 0.00 19575.02 4178.28 16598.64 00:45:18.479 0 00:45:18.479 14:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:45:18.479 14:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.479 14:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:18.479 14:58:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.479 14:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:45:18.479 "subsystems": [ 00:45:18.479 { 00:45:18.479 "subsystem": "keyring", 00:45:18.479 "config": [ 00:45:18.479 { 00:45:18.479 "method": "keyring_file_add_key", 00:45:18.479 "params": { 00:45:18.479 "name": "key0", 00:45:18.479 "path": "/tmp/tmp.cG9ne331uX" 00:45:18.479 } 00:45:18.479 } 00:45:18.479 ] 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "subsystem": "iobuf", 00:45:18.479 "config": [ 00:45:18.479 { 00:45:18.479 "method": "iobuf_set_options", 00:45:18.479 "params": { 00:45:18.479 "large_bufsize": 135168, 00:45:18.479 "large_pool_count": 1024, 00:45:18.479 "small_bufsize": 8192, 00:45:18.479 "small_pool_count": 8192 00:45:18.479 } 00:45:18.479 } 00:45:18.479 ] 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "subsystem": "sock", 00:45:18.479 "config": [ 00:45:18.479 { 00:45:18.479 "method": "sock_set_default_impl", 00:45:18.479 "params": { 00:45:18.479 "impl_name": "posix" 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "sock_impl_set_options", 00:45:18.479 "params": { 00:45:18.479 "enable_ktls": false, 
00:45:18.479 "enable_placement_id": 0, 00:45:18.479 "enable_quickack": false, 00:45:18.479 "enable_recv_pipe": true, 00:45:18.479 "enable_zerocopy_send_client": false, 00:45:18.479 "enable_zerocopy_send_server": true, 00:45:18.479 "impl_name": "ssl", 00:45:18.479 "recv_buf_size": 4096, 00:45:18.479 "send_buf_size": 4096, 00:45:18.479 "tls_version": 0, 00:45:18.479 "zerocopy_threshold": 0 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "sock_impl_set_options", 00:45:18.479 "params": { 00:45:18.479 "enable_ktls": false, 00:45:18.479 "enable_placement_id": 0, 00:45:18.479 "enable_quickack": false, 00:45:18.479 "enable_recv_pipe": true, 00:45:18.479 "enable_zerocopy_send_client": false, 00:45:18.479 "enable_zerocopy_send_server": true, 00:45:18.479 "impl_name": "posix", 00:45:18.479 "recv_buf_size": 2097152, 00:45:18.479 "send_buf_size": 2097152, 00:45:18.479 "tls_version": 0, 00:45:18.479 "zerocopy_threshold": 0 00:45:18.479 } 00:45:18.479 } 00:45:18.479 ] 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "subsystem": "vmd", 00:45:18.479 "config": [] 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "subsystem": "accel", 00:45:18.479 "config": [ 00:45:18.479 { 00:45:18.479 "method": "accel_set_options", 00:45:18.479 "params": { 00:45:18.479 "buf_count": 2048, 00:45:18.479 "large_cache_size": 16, 00:45:18.479 "sequence_count": 2048, 00:45:18.479 "small_cache_size": 128, 00:45:18.479 "task_count": 2048 00:45:18.479 } 00:45:18.479 } 00:45:18.479 ] 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "subsystem": "bdev", 00:45:18.479 "config": [ 00:45:18.479 { 00:45:18.479 "method": "bdev_set_options", 00:45:18.479 "params": { 00:45:18.479 "bdev_auto_examine": true, 00:45:18.479 "bdev_io_cache_size": 256, 00:45:18.479 "bdev_io_pool_size": 65535, 00:45:18.479 "iobuf_large_cache_size": 16, 00:45:18.479 "iobuf_small_cache_size": 128 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "bdev_raid_set_options", 00:45:18.479 "params": { 00:45:18.479 "process_window_size_kb": 1024 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "bdev_iscsi_set_options", 00:45:18.479 "params": { 00:45:18.479 "timeout_sec": 30 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "bdev_nvme_set_options", 00:45:18.479 "params": { 00:45:18.479 "action_on_timeout": "none", 00:45:18.479 "allow_accel_sequence": false, 00:45:18.479 "arbitration_burst": 0, 00:45:18.479 "bdev_retry_count": 3, 00:45:18.479 "ctrlr_loss_timeout_sec": 0, 00:45:18.479 "delay_cmd_submit": true, 00:45:18.479 "dhchap_dhgroups": [ 00:45:18.479 "null", 00:45:18.479 "ffdhe2048", 00:45:18.479 "ffdhe3072", 00:45:18.479 "ffdhe4096", 00:45:18.479 "ffdhe6144", 00:45:18.479 "ffdhe8192" 00:45:18.479 ], 00:45:18.479 "dhchap_digests": [ 00:45:18.479 "sha256", 00:45:18.479 "sha384", 00:45:18.479 "sha512" 00:45:18.479 ], 00:45:18.479 "disable_auto_failback": false, 00:45:18.479 "fast_io_fail_timeout_sec": 0, 00:45:18.479 "generate_uuids": false, 00:45:18.479 "high_priority_weight": 0, 00:45:18.479 "io_path_stat": false, 00:45:18.479 "io_queue_requests": 0, 00:45:18.479 "keep_alive_timeout_ms": 10000, 00:45:18.479 "low_priority_weight": 0, 00:45:18.479 "medium_priority_weight": 0, 00:45:18.479 "nvme_adminq_poll_period_us": 10000, 00:45:18.479 "nvme_error_stat": false, 00:45:18.479 "nvme_ioq_poll_period_us": 0, 00:45:18.479 "rdma_cm_event_timeout_ms": 0, 00:45:18.479 "rdma_max_cq_size": 0, 00:45:18.479 "rdma_srq_size": 0, 00:45:18.479 "reconnect_delay_sec": 0, 00:45:18.479 "timeout_admin_us": 0, 00:45:18.479 
"timeout_us": 0, 00:45:18.479 "transport_ack_timeout": 0, 00:45:18.479 "transport_retry_count": 4, 00:45:18.479 "transport_tos": 0 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "bdev_nvme_set_hotplug", 00:45:18.479 "params": { 00:45:18.479 "enable": false, 00:45:18.479 "period_us": 100000 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "bdev_malloc_create", 00:45:18.479 "params": { 00:45:18.479 "block_size": 4096, 00:45:18.479 "name": "malloc0", 00:45:18.479 "num_blocks": 8192, 00:45:18.479 "optimal_io_boundary": 0, 00:45:18.479 "physical_block_size": 4096, 00:45:18.479 "uuid": "55122779-c738-4ea9-88ca-4c5d6249f5e0" 00:45:18.479 } 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "method": "bdev_wait_for_examine" 00:45:18.479 } 00:45:18.479 ] 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "subsystem": "nbd", 00:45:18.479 "config": [] 00:45:18.479 }, 00:45:18.479 { 00:45:18.479 "subsystem": "scheduler", 00:45:18.480 "config": [ 00:45:18.480 { 00:45:18.480 "method": "framework_set_scheduler", 00:45:18.480 "params": { 00:45:18.480 "name": "static" 00:45:18.480 } 00:45:18.480 } 00:45:18.480 ] 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "subsystem": "nvmf", 00:45:18.480 "config": [ 00:45:18.480 { 00:45:18.480 "method": "nvmf_set_config", 00:45:18.480 "params": { 00:45:18.480 "admin_cmd_passthru": { 00:45:18.480 "identify_ctrlr": false 00:45:18.480 }, 00:45:18.480 "discovery_filter": "match_any" 00:45:18.480 } 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "method": "nvmf_set_max_subsystems", 00:45:18.480 "params": { 00:45:18.480 "max_subsystems": 1024 00:45:18.480 } 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "method": "nvmf_set_crdt", 00:45:18.480 "params": { 00:45:18.480 "crdt1": 0, 00:45:18.480 "crdt2": 0, 00:45:18.480 "crdt3": 0 00:45:18.480 } 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "method": "nvmf_create_transport", 00:45:18.480 "params": { 00:45:18.480 "abort_timeout_sec": 1, 00:45:18.480 "ack_timeout": 0, 00:45:18.480 "buf_cache_size": 4294967295, 00:45:18.480 "c2h_success": false, 00:45:18.480 "data_wr_pool_size": 0, 00:45:18.480 "dif_insert_or_strip": false, 00:45:18.480 "in_capsule_data_size": 4096, 00:45:18.480 "io_unit_size": 131072, 00:45:18.480 "max_aq_depth": 128, 00:45:18.480 "max_io_qpairs_per_ctrlr": 127, 00:45:18.480 "max_io_size": 131072, 00:45:18.480 "max_queue_depth": 128, 00:45:18.480 "num_shared_buffers": 511, 00:45:18.480 "sock_priority": 0, 00:45:18.480 "trtype": "TCP", 00:45:18.480 "zcopy": false 00:45:18.480 } 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "method": "nvmf_create_subsystem", 00:45:18.480 "params": { 00:45:18.480 "allow_any_host": false, 00:45:18.480 "ana_reporting": false, 00:45:18.480 "max_cntlid": 65519, 00:45:18.480 "max_namespaces": 32, 00:45:18.480 "min_cntlid": 1, 00:45:18.480 "model_number": "SPDK bdev Controller", 00:45:18.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.480 "serial_number": "00000000000000000000" 00:45:18.480 } 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "method": "nvmf_subsystem_add_host", 00:45:18.480 "params": { 00:45:18.480 "host": "nqn.2016-06.io.spdk:host1", 00:45:18.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.480 "psk": "key0" 00:45:18.480 } 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "method": "nvmf_subsystem_add_ns", 00:45:18.480 "params": { 00:45:18.480 "namespace": { 00:45:18.480 "bdev_name": "malloc0", 00:45:18.480 "nguid": "55122779C7384EA988CA4C5D6249F5E0", 00:45:18.480 "no_auto_visible": false, 00:45:18.480 "nsid": 1, 00:45:18.480 "uuid": 
"55122779-c738-4ea9-88ca-4c5d6249f5e0" 00:45:18.480 }, 00:45:18.480 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:45:18.480 } 00:45:18.480 }, 00:45:18.480 { 00:45:18.480 "method": "nvmf_subsystem_add_listener", 00:45:18.480 "params": { 00:45:18.480 "listen_address": { 00:45:18.480 "adrfam": "IPv4", 00:45:18.480 "traddr": "10.0.0.2", 00:45:18.480 "trsvcid": "4420", 00:45:18.480 "trtype": "TCP" 00:45:18.480 }, 00:45:18.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.480 "secure_channel": true 00:45:18.480 } 00:45:18.480 } 00:45:18.480 ] 00:45:18.480 } 00:45:18.480 ] 00:45:18.480 }' 00:45:18.480 14:58:37 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:45:18.738 14:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:45:18.738 "subsystems": [ 00:45:18.738 { 00:45:18.738 "subsystem": "keyring", 00:45:18.738 "config": [ 00:45:18.738 { 00:45:18.738 "method": "keyring_file_add_key", 00:45:18.738 "params": { 00:45:18.738 "name": "key0", 00:45:18.738 "path": "/tmp/tmp.cG9ne331uX" 00:45:18.738 } 00:45:18.738 } 00:45:18.738 ] 00:45:18.738 }, 00:45:18.738 { 00:45:18.738 "subsystem": "iobuf", 00:45:18.738 "config": [ 00:45:18.738 { 00:45:18.738 "method": "iobuf_set_options", 00:45:18.738 "params": { 00:45:18.738 "large_bufsize": 135168, 00:45:18.738 "large_pool_count": 1024, 00:45:18.738 "small_bufsize": 8192, 00:45:18.738 "small_pool_count": 8192 00:45:18.738 } 00:45:18.738 } 00:45:18.738 ] 00:45:18.738 }, 00:45:18.738 { 00:45:18.738 "subsystem": "sock", 00:45:18.738 "config": [ 00:45:18.738 { 00:45:18.738 "method": "sock_set_default_impl", 00:45:18.738 "params": { 00:45:18.738 "impl_name": "posix" 00:45:18.738 } 00:45:18.738 }, 00:45:18.738 { 00:45:18.738 "method": "sock_impl_set_options", 00:45:18.738 "params": { 00:45:18.738 "enable_ktls": false, 00:45:18.738 "enable_placement_id": 0, 00:45:18.738 "enable_quickack": false, 00:45:18.738 "enable_recv_pipe": true, 00:45:18.738 "enable_zerocopy_send_client": false, 00:45:18.738 "enable_zerocopy_send_server": true, 00:45:18.738 "impl_name": "ssl", 00:45:18.738 "recv_buf_size": 4096, 00:45:18.738 "send_buf_size": 4096, 00:45:18.738 "tls_version": 0, 00:45:18.738 "zerocopy_threshold": 0 00:45:18.738 } 00:45:18.738 }, 00:45:18.738 { 00:45:18.738 "method": "sock_impl_set_options", 00:45:18.738 "params": { 00:45:18.738 "enable_ktls": false, 00:45:18.738 "enable_placement_id": 0, 00:45:18.738 "enable_quickack": false, 00:45:18.738 "enable_recv_pipe": true, 00:45:18.738 "enable_zerocopy_send_client": false, 00:45:18.738 "enable_zerocopy_send_server": true, 00:45:18.738 "impl_name": "posix", 00:45:18.738 "recv_buf_size": 2097152, 00:45:18.738 "send_buf_size": 2097152, 00:45:18.738 "tls_version": 0, 00:45:18.738 "zerocopy_threshold": 0 00:45:18.738 } 00:45:18.738 } 00:45:18.738 ] 00:45:18.738 }, 00:45:18.738 { 00:45:18.738 "subsystem": "vmd", 00:45:18.738 "config": [] 00:45:18.738 }, 00:45:18.738 { 00:45:18.738 "subsystem": "accel", 00:45:18.738 "config": [ 00:45:18.738 { 00:45:18.738 "method": "accel_set_options", 00:45:18.738 "params": { 00:45:18.738 "buf_count": 2048, 00:45:18.738 "large_cache_size": 16, 00:45:18.738 "sequence_count": 2048, 00:45:18.738 "small_cache_size": 128, 00:45:18.738 "task_count": 2048 00:45:18.738 } 00:45:18.738 } 00:45:18.738 ] 00:45:18.738 }, 00:45:18.738 { 00:45:18.738 "subsystem": "bdev", 00:45:18.738 "config": [ 00:45:18.738 { 00:45:18.738 "method": "bdev_set_options", 00:45:18.738 "params": { 00:45:18.738 "bdev_auto_examine": true, 
00:45:18.738 "bdev_io_cache_size": 256, 00:45:18.738 "bdev_io_pool_size": 65535, 00:45:18.738 "iobuf_large_cache_size": 16, 00:45:18.738 "iobuf_small_cache_size": 128 00:45:18.738 } 00:45:18.738 }, 00:45:18.738 { 00:45:18.739 "method": "bdev_raid_set_options", 00:45:18.739 "params": { 00:45:18.739 "process_window_size_kb": 1024 00:45:18.739 } 00:45:18.739 }, 00:45:18.739 { 00:45:18.739 "method": "bdev_iscsi_set_options", 00:45:18.739 "params": { 00:45:18.739 "timeout_sec": 30 00:45:18.739 } 00:45:18.739 }, 00:45:18.739 { 00:45:18.739 "method": "bdev_nvme_set_options", 00:45:18.739 "params": { 00:45:18.739 "action_on_timeout": "none", 00:45:18.739 "allow_accel_sequence": false, 00:45:18.739 "arbitration_burst": 0, 00:45:18.739 "bdev_retry_count": 3, 00:45:18.739 "ctrlr_loss_timeout_sec": 0, 00:45:18.739 "delay_cmd_submit": true, 00:45:18.739 "dhchap_dhgroups": [ 00:45:18.739 "null", 00:45:18.739 "ffdhe2048", 00:45:18.739 "ffdhe3072", 00:45:18.739 "ffdhe4096", 00:45:18.739 "ffdhe6144", 00:45:18.739 "ffdhe8192" 00:45:18.739 ], 00:45:18.739 "dhchap_digests": [ 00:45:18.739 "sha256", 00:45:18.739 "sha384", 00:45:18.739 "sha512" 00:45:18.739 ], 00:45:18.739 "disable_auto_failback": false, 00:45:18.739 "fast_io_fail_timeout_sec": 0, 00:45:18.739 "generate_uuids": false, 00:45:18.739 "high_priority_weight": 0, 00:45:18.739 "io_path_stat": false, 00:45:18.739 "io_queue_requests": 512, 00:45:18.739 "keep_alive_timeout_ms": 10000, 00:45:18.739 "low_priority_weight": 0, 00:45:18.739 "medium_priority_weight": 0, 00:45:18.739 "nvme_adminq_poll_period_us": 10000, 00:45:18.739 "nvme_error_stat": false, 00:45:18.739 "nvme_ioq_poll_period_us": 0, 00:45:18.739 "rdma_cm_event_timeout_ms": 0, 00:45:18.739 "rdma_max_cq_size": 0, 00:45:18.739 "rdma_srq_size": 0, 00:45:18.739 "reconnect_delay_sec": 0, 00:45:18.739 "timeout_admin_us": 0, 00:45:18.739 "timeout_us": 0, 00:45:18.739 "transport_ack_timeout": 0, 00:45:18.739 "transport_retry_count": 4, 00:45:18.739 "transport_tos": 0 00:45:18.739 } 00:45:18.739 }, 00:45:18.739 { 00:45:18.739 "method": "bdev_nvme_attach_controller", 00:45:18.739 "params": { 00:45:18.739 "adrfam": "IPv4", 00:45:18.739 "ctrlr_loss_timeout_sec": 0, 00:45:18.739 "ddgst": false, 00:45:18.739 "fast_io_fail_timeout_sec": 0, 00:45:18.739 "hdgst": false, 00:45:18.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:18.739 "name": "nvme0", 00:45:18.739 "prchk_guard": false, 00:45:18.739 "prchk_reftag": false, 00:45:18.739 "psk": "key0", 00:45:18.739 "reconnect_delay_sec": 0, 00:45:18.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.739 "traddr": "10.0.0.2", 00:45:18.739 "trsvcid": "4420", 00:45:18.739 "trtype": "TCP" 00:45:18.739 } 00:45:18.739 }, 00:45:18.739 { 00:45:18.739 "method": "bdev_nvme_set_hotplug", 00:45:18.739 "params": { 00:45:18.739 "enable": false, 00:45:18.739 "period_us": 100000 00:45:18.739 } 00:45:18.739 }, 00:45:18.739 { 00:45:18.739 "method": "bdev_enable_histogram", 00:45:18.739 "params": { 00:45:18.739 "enable": true, 00:45:18.739 "name": "nvme0n1" 00:45:18.739 } 00:45:18.739 }, 00:45:18.739 { 00:45:18.739 "method": "bdev_wait_for_examine" 00:45:18.739 } 00:45:18.739 ] 00:45:18.739 }, 00:45:18.739 { 00:45:18.739 "subsystem": "nbd", 00:45:18.739 "config": [] 00:45:18.739 } 00:45:18.739 ] 00:45:18.739 }' 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 99978 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99978 ']' 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99978 
00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99978 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:45:18.739 killing process with pid 99978 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99978' 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99978 00:45:18.739 Received shutdown signal, test time was about 1.000000 seconds 00:45:18.739 00:45:18.739 Latency(us) 00:45:18.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:18.739 =================================================================================================================== 00:45:18.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:18.739 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99978 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 99928 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99928 ']' 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99928 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99928 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:45:18.998 killing process with pid 99928 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99928' 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99928 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99928 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:18.998 14:58:38 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:45:18.998 "subsystems": [ 00:45:18.998 { 00:45:18.998 "subsystem": "keyring", 00:45:18.998 "config": [ 00:45:18.998 { 00:45:18.998 "method": "keyring_file_add_key", 00:45:18.998 "params": { 00:45:18.998 "name": "key0", 00:45:18.998 "path": "/tmp/tmp.cG9ne331uX" 00:45:18.998 } 00:45:18.998 } 00:45:18.998 ] 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "subsystem": "iobuf", 00:45:18.998 "config": [ 00:45:18.998 { 00:45:18.998 "method": "iobuf_set_options", 00:45:18.998 "params": { 00:45:18.998 "large_bufsize": 135168, 00:45:18.998 "large_pool_count": 1024, 00:45:18.998 "small_bufsize": 8192, 00:45:18.998 "small_pool_count": 8192 00:45:18.998 } 00:45:18.998 } 00:45:18.998 ] 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "subsystem": "sock", 00:45:18.998 "config": [ 00:45:18.998 { 00:45:18.998 "method": "sock_set_default_impl", 00:45:18.998 "params": { 00:45:18.998 "impl_name": "posix" 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": 
"sock_impl_set_options", 00:45:18.998 "params": { 00:45:18.998 "enable_ktls": false, 00:45:18.998 "enable_placement_id": 0, 00:45:18.998 "enable_quickack": false, 00:45:18.998 "enable_recv_pipe": true, 00:45:18.998 "enable_zerocopy_send_client": false, 00:45:18.998 "enable_zerocopy_send_server": true, 00:45:18.998 "impl_name": "ssl", 00:45:18.998 "recv_buf_size": 4096, 00:45:18.998 "send_buf_size": 4096, 00:45:18.998 "tls_version": 0, 00:45:18.998 "zerocopy_threshold": 0 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": "sock_impl_set_options", 00:45:18.998 "params": { 00:45:18.998 "enable_ktls": false, 00:45:18.998 "enable_placement_id": 0, 00:45:18.998 "enable_quickack": false, 00:45:18.998 "enable_recv_pipe": true, 00:45:18.998 "enable_zerocopy_send_client": false, 00:45:18.998 "enable_zerocopy_send_server": true, 00:45:18.998 "impl_name": "posix", 00:45:18.998 "recv_buf_size": 2097152, 00:45:18.998 "send_buf_size": 2097152, 00:45:18.998 "tls_version": 0, 00:45:18.998 "zerocopy_threshold": 0 00:45:18.998 } 00:45:18.998 } 00:45:18.998 ] 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "subsystem": "vmd", 00:45:18.998 "config": [] 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "subsystem": "accel", 00:45:18.998 "config": [ 00:45:18.998 { 00:45:18.998 "method": "accel_set_options", 00:45:18.998 "params": { 00:45:18.998 "buf_count": 2048, 00:45:18.998 "large_cache_size": 16, 00:45:18.998 "sequence_count": 2048, 00:45:18.998 "small_cache_size": 128, 00:45:18.998 "task_count": 2048 00:45:18.998 } 00:45:18.998 } 00:45:18.998 ] 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "subsystem": "bdev", 00:45:18.998 "config": [ 00:45:18.998 { 00:45:18.998 "method": "bdev_set_options", 00:45:18.998 "params": { 00:45:18.998 "bdev_auto_examine": true, 00:45:18.998 "bdev_io_cache_size": 256, 00:45:18.998 "bdev_io_pool_size": 65535, 00:45:18.998 "iobuf_large_cache_size": 16, 00:45:18.998 "iobuf_small_cache_size": 128 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": "bdev_raid_set_options", 00:45:18.998 "params": { 00:45:18.998 "process_window_size_kb": 1024 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": "bdev_iscsi_set_options", 00:45:18.998 "params": { 00:45:18.998 "timeout_sec": 30 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": "bdev_nvme_set_options", 00:45:18.998 "params": { 00:45:18.998 "action_on_timeout": "none", 00:45:18.998 "allow_accel_sequence": false, 00:45:18.998 "arbitration_burst": 0, 00:45:18.998 "bdev_retry_count": 3, 00:45:18.998 "ctrlr_loss_timeout_sec": 0, 00:45:18.998 "delay_cmd_submit": true, 00:45:18.998 "dhchap_dhgroups": [ 00:45:18.998 "null", 00:45:18.998 "ffdhe2048", 00:45:18.998 "ffdhe3072", 00:45:18.998 "ffdhe4096", 00:45:18.998 "ffdhe6144", 00:45:18.998 "ffdhe8192" 00:45:18.998 ], 00:45:18.998 "dhchap_digests": [ 00:45:18.998 "sha256", 00:45:18.998 "sha384", 00:45:18.998 "sha512" 00:45:18.998 ], 00:45:18.998 "disable_auto_failback": false, 00:45:18.998 "fast_io_fail_timeout_sec": 0, 00:45:18.998 "generate_uuids": false, 00:45:18.998 "high_priority_weight": 0, 00:45:18.998 "io_path_stat": false, 00:45:18.998 "io_queue_requests": 0, 00:45:18.998 "keep_alive_timeout_ms": 10000, 00:45:18.998 "low_priority_weight": 0, 00:45:18.998 "medium_priority_weight": 0, 00:45:18.998 "nvme_adminq_poll_period_us": 10000, 00:45:18.998 "nvme_error_stat": false, 00:45:18.998 "nvme_ioq_poll_period_us": 0, 00:45:18.998 "rdma_cm_event_timeout_ms": 0, 00:45:18.998 "rdma_max_cq_size": 0, 00:45:18.998 "rdma_srq_size": 0, 00:45:18.998 
"reconnect_delay_sec": 0, 00:45:18.998 "timeout_admin_us": 0, 00:45:18.998 "timeout_us": 0, 00:45:18.998 "transport_ack_timeout": 0, 00:45:18.998 "transport_retry_count": 4, 00:45:18.998 "transport_tos": 0 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": "bdev_nvme_set_hotplug", 00:45:18.998 "params": { 00:45:18.998 "enable": false, 00:45:18.998 "period_us": 100000 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": "bdev_malloc_create", 00:45:18.998 "params": { 00:45:18.998 "block_size": 4096, 00:45:18.998 "name": "malloc0", 00:45:18.998 "num_blocks": 8192, 00:45:18.998 "optimal_io_boundary": 0, 00:45:18.998 "physical_block_size": 4096, 00:45:18.998 "uuid": "55122779-c738-4ea9-88ca-4c5d6249f5e0" 00:45:18.998 } 00:45:18.998 }, 00:45:18.998 { 00:45:18.998 "method": "bdev_wait_for_examine" 00:45:18.998 } 00:45:18.998 ] 00:45:18.998 }, 00:45:18.998 { 00:45:18.999 "subsystem": "nbd", 00:45:18.999 "config": [] 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "subsystem": "scheduler", 00:45:18.999 "config": [ 00:45:18.999 { 00:45:18.999 "method": "framework_set_scheduler", 00:45:18.999 "params": { 00:45:18.999 "name": "static" 00:45:18.999 } 00:45:18.999 } 00:45:18.999 ] 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "subsystem": "nvmf", 00:45:18.999 "config": [ 00:45:18.999 { 00:45:18.999 "method": "nvmf_set_config", 00:45:18.999 "params": { 00:45:18.999 "admin_cmd_passthru": { 00:45:18.999 "identify_ctrlr": false 00:45:18.999 }, 00:45:18.999 "discovery_filter": "match_any" 00:45:18.999 } 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "method": "nvmf_set_max_subsystems", 00:45:18.999 "params": { 00:45:18.999 "max_subsystems": 1024 00:45:18.999 } 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "method": "nvmf_set_crdt", 00:45:18.999 "params": { 00:45:18.999 "crdt1": 0, 00:45:18.999 "crdt2": 0, 00:45:18.999 "crdt3": 0 00:45:18.999 } 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "method": "nvmf_create_transport", 00:45:18.999 "params": { 00:45:18.999 "abort_timeout_sec": 1, 00:45:18.999 "ack_timeout": 0, 00:45:18.999 "buf_cache_size": 4294967295, 00:45:18.999 "c2h_success": false, 00:45:18.999 "data_wr_pool_size": 0, 00:45:18.999 "dif_insert_or_strip": false, 00:45:18.999 "in_capsule_data_size": 4096, 00:45:18.999 "io_unit_size": 131072, 00:45:18.999 "max_aq_depth": 128, 00:45:18.999 "max_io_qpairs_per_ctrlr": 127, 00:45:18.999 "max_io_size": 131072, 00:45:18.999 "max_queue_depth": 128, 00:45:18.999 "num_shared_buffers": 511, 00:45:18.999 "sock_priority": 0, 00:45:18.999 "trtype": "TCP", 00:45:18.999 "zcopy": false 00:45:18.999 } 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "method": "nvmf_create_subsystem", 00:45:18.999 "params": { 00:45:18.999 "allow_any_host": false, 00:45:18.999 "ana_report 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:45:18.999 ing": false, 00:45:18.999 "max_cntlid": 65519, 00:45:18.999 "max_namespaces": 32, 00:45:18.999 "min_cntlid": 1, 00:45:18.999 "model_number": "SPDK bdev Controller", 00:45:18.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.999 "serial_number": "00000000000000000000" 00:45:18.999 } 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "method": "nvmf_subsystem_add_host", 00:45:18.999 "params": { 00:45:18.999 "host": "nqn.2016-06.io.spdk:host1", 00:45:18.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.999 "psk": "key0" 00:45:18.999 } 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "method": "nvmf_subsystem_add_ns", 00:45:18.999 "params": { 00:45:18.999 "namespace": { 00:45:18.999 "bdev_name": "malloc0", 
00:45:18.999 "nguid": "55122779C7384EA988CA4C5D6249F5E0", 00:45:18.999 "no_auto_visible": false, 00:45:18.999 "nsid": 1, 00:45:18.999 "uuid": "55122779-c738-4ea9-88ca-4c5d6249f5e0" 00:45:18.999 }, 00:45:18.999 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:45:18.999 } 00:45:18.999 }, 00:45:18.999 { 00:45:18.999 "method": "nvmf_subsystem_add_listener", 00:45:18.999 "params": { 00:45:18.999 "listen_address": { 00:45:18.999 "adrfam": "IPv4", 00:45:18.999 "traddr": "10.0.0.2", 00:45:18.999 "trsvcid": "4420", 00:45:18.999 "trtype": "TCP" 00:45:18.999 }, 00:45:18.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:45:18.999 "secure_channel": true 00:45:18.999 } 00:45:18.999 } 00:45:18.999 ] 00:45:18.999 } 00:45:18.999 ] 00:45:18.999 }' 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100062 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100062 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100062 ']' 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:18.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:18.999 14:58:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:19.258 [2024-07-22 14:58:38.641881] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:19.258 [2024-07-22 14:58:38.641944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:19.258 [2024-07-22 14:58:38.782994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:19.258 [2024-07-22 14:58:38.832076] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:19.258 [2024-07-22 14:58:38.832119] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:19.258 [2024-07-22 14:58:38.832141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:19.258 [2024-07-22 14:58:38.832145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:19.258 [2024-07-22 14:58:38.832149] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:45:19.258 [2024-07-22 14:58:38.832219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:19.515 [2024-07-22 14:58:39.050353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:19.515 [2024-07-22 14:58:39.082322] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:19.515 [2024-07-22 14:58:39.082598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:20.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=100106 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 100106 /var/tmp/bdevperf.sock 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100106 ']' 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:20.083 14:58:39 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:45:20.083 "subsystems": [ 00:45:20.083 { 00:45:20.083 "subsystem": "keyring", 00:45:20.083 "config": [ 00:45:20.083 { 00:45:20.083 "method": "keyring_file_add_key", 00:45:20.083 "params": { 00:45:20.083 "name": "key0", 00:45:20.083 "path": "/tmp/tmp.cG9ne331uX" 00:45:20.083 } 00:45:20.083 } 00:45:20.083 ] 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "subsystem": "iobuf", 00:45:20.083 "config": [ 00:45:20.083 { 00:45:20.083 "method": "iobuf_set_options", 00:45:20.083 "params": { 00:45:20.083 "large_bufsize": 135168, 00:45:20.083 "large_pool_count": 1024, 00:45:20.083 "small_bufsize": 8192, 00:45:20.083 "small_pool_count": 8192 00:45:20.083 } 00:45:20.083 } 00:45:20.083 ] 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "subsystem": "sock", 00:45:20.083 "config": [ 00:45:20.083 { 00:45:20.083 "method": "sock_set_default_impl", 00:45:20.083 "params": { 00:45:20.083 "impl_name": "posix" 00:45:20.083 } 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "method": "sock_impl_set_options", 00:45:20.083 "params": { 00:45:20.083 "enable_ktls": false, 00:45:20.083 "enable_placement_id": 0, 00:45:20.083 "enable_quickack": false, 00:45:20.083 "enable_recv_pipe": true, 00:45:20.083 "enable_zerocopy_send_client": false, 00:45:20.083 "enable_zerocopy_send_server": true, 00:45:20.083 "impl_name": "ssl", 00:45:20.083 "recv_buf_size": 4096, 00:45:20.083 "send_buf_size": 4096, 00:45:20.083 "tls_version": 0, 00:45:20.083 "zerocopy_threshold": 0 00:45:20.083 } 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "method": "sock_impl_set_options", 00:45:20.083 "params": { 00:45:20.083 "enable_ktls": false, 00:45:20.083 "enable_placement_id": 0, 00:45:20.083 "enable_quickack": false, 00:45:20.083 "enable_recv_pipe": true, 00:45:20.083 "enable_zerocopy_send_client": false, 00:45:20.083 "enable_zerocopy_send_server": true, 00:45:20.083 "impl_name": "posix", 00:45:20.083 "recv_buf_size": 2097152, 00:45:20.083 "send_buf_size": 2097152, 00:45:20.083 "tls_version": 0, 00:45:20.083 "zerocopy_threshold": 0 00:45:20.083 } 00:45:20.083 } 00:45:20.083 ] 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "subsystem": "vmd", 00:45:20.083 "config": [] 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "subsystem": "accel", 00:45:20.083 "config": [ 00:45:20.083 { 00:45:20.083 "method": "accel_set_options", 00:45:20.083 "params": { 00:45:20.083 "buf_count": 2048, 00:45:20.083 "large_cache_size": 16, 00:45:20.083 "sequence_count": 2048, 00:45:20.083 "small_cache_size": 128, 00:45:20.083 "task_count": 2048 00:45:20.083 } 00:45:20.083 } 00:45:20.083 ] 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "subsystem": "bdev", 00:45:20.083 "config": [ 00:45:20.083 { 00:45:20.083 "method": "bdev_set_options", 00:45:20.083 "params": { 00:45:20.083 "bdev_auto_examine": true, 00:45:20.083 "bdev_io_cache_size": 256, 00:45:20.083 "bdev_io_pool_size": 65535, 00:45:20.083 "iobuf_large_cache_size": 16, 00:45:20.083 "iobuf_small_cache_size": 128 00:45:20.083 } 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "method": "bdev_raid_set_options", 00:45:20.083 "params": { 00:45:20.083 "process_window_size_kb": 1024 00:45:20.083 } 00:45:20.083 }, 00:45:20.083 
{ 00:45:20.083 "method": "bdev_iscsi_set_options", 00:45:20.083 "params": { 00:45:20.083 "timeout_sec": 30 00:45:20.083 } 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "method": "bdev_nvme_set_options", 00:45:20.083 "params": { 00:45:20.083 "action_on_timeout": "none", 00:45:20.083 "allow_accel_sequence": false, 00:45:20.083 "arbitration_burst": 0, 00:45:20.083 "bdev_retry_count": 3, 00:45:20.083 "ctrlr_loss_timeout_sec": 0, 00:45:20.083 "delay_cmd_submit": true, 00:45:20.083 "dhchap_dhgroups": [ 00:45:20.083 "null", 00:45:20.083 "ffdhe2048", 00:45:20.083 "ffdhe3072", 00:45:20.083 "ffdhe4096", 00:45:20.083 "ffdhe6144", 00:45:20.083 "ffdhe8192" 00:45:20.083 ], 00:45:20.083 "dhchap_digests": [ 00:45:20.083 "sha256", 00:45:20.083 "sha384", 00:45:20.083 "sha512" 00:45:20.083 ], 00:45:20.083 "disable_auto_failback": false, 00:45:20.083 "fast_io_fail_timeout_sec": 0, 00:45:20.083 "generate_uuids": false, 00:45:20.083 "high_priority_weight": 0, 00:45:20.083 "io_path_stat": false, 00:45:20.083 "io_queue_requests": 512, 00:45:20.083 "keep_alive_timeout_ms": 10000, 00:45:20.083 "low_priority_weight": 0, 00:45:20.083 "medium_priority_weight": 0, 00:45:20.083 "nvme_adminq_poll_period_us": 10000, 00:45:20.083 "nvme_error_stat": false, 00:45:20.083 "nvme_ioq_poll_period_us": 0, 00:45:20.083 "rdma_cm_event_timeout_ms": 0, 00:45:20.083 "rdma_max_cq_size": 0, 00:45:20.083 "rdma_srq_size": 0, 00:45:20.083 "reconnect_delay_sec": 0, 00:45:20.083 "timeout_admin_us": 0, 00:45:20.083 "timeout_us": 0, 00:45:20.083 "transport_ack_timeout": 0, 00:45:20.083 "transport_retry_count": 4, 00:45:20.083 "transport_tos": 0 00:45:20.083 } 00:45:20.083 }, 00:45:20.083 { 00:45:20.083 "method": "bdev_nvme_attach_controller", 00:45:20.083 "params": { 00:45:20.083 "adrfam": "IPv4", 00:45:20.083 "ctrlr_loss_timeout_sec": 0, 00:45:20.083 "ddgst": false, 00:45:20.083 "fast_io_fail_timeout_sec": 0, 00:45:20.083 "hdgst": false, 00:45:20.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:20.083 "name": "nvme0", 00:45:20.083 "prchk_guard": false, 00:45:20.084 "prchk_reftag": false, 00:45:20.084 "psk": "key0", 00:45:20.084 "reconnect_delay_sec": 0, 00:45:20.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:20.084 "traddr": "10.0.0.2", 00:45:20.084 "trsvcid": "4420", 00:45:20.084 "trtype": "TCP" 00:45:20.084 } 00:45:20.084 }, 00:45:20.084 { 00:45:20.084 "method": "bdev_nvme_set_hotplug", 00:45:20.084 "params": { 00:45:20.084 "enable": false, 00:45:20.084 "period_us": 100000 00:45:20.084 } 00:45:20.084 }, 00:45:20.084 { 00:45:20.084 "method": "bdev_enable_histogram", 00:45:20.084 "params": { 00:45:20.084 "enable": true, 00:45:20.084 "name": "nvme0n1" 00:45:20.084 } 00:45:20.084 }, 00:45:20.084 { 00:45:20.084 "method": "bdev_wait_for_examine" 00:45:20.084 } 00:45:20.084 ] 00:45:20.084 }, 00:45:20.084 { 00:45:20.084 "subsystem": "nbd", 00:45:20.084 "config": [] 00:45:20.084 } 00:45:20.084 ] 00:45:20.084 }' 00:45:20.084 [2024-07-22 14:58:39.606997] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
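That closes the configuration bdevperf reads from /dev/fd/63. Most of it restates defaults; the TLS-relevant pieces are the keyring entry (key0 -> /tmp/tmp.cG9ne331uX) and the bdev_nvme_attach_controller call that references it via "psk": "key0". A trimmed-down sketch of the same launch, keeping only those two subsystem entries and letting everything else fall back to defaults (an illustration, not the script's exact text):

  config='{
    "subsystems": [
      { "subsystem": "keyring", "config": [
          { "method": "keyring_file_add_key",
            "params": { "name": "key0", "path": "/tmp/tmp.cG9ne331uX" } } ] },
      { "subsystem": "bdev", "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "psk": "key0" } },
          { "method": "bdev_wait_for_examine" } ] } ] }'

  # same invocation as in the trace, with the config fed via process substitution
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
      -c <(echo "$config")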
00:45:20.084 [2024-07-22 14:58:39.607482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100106 ] 00:45:20.342 [2024-07-22 14:58:39.745354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:20.342 [2024-07-22 14:58:39.796399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:20.342 [2024-07-22 14:58:39.942070] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:20.908 14:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:20.908 14:58:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:45:20.908 14:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:45:20.908 14:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:45:21.167 14:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:45:21.167 14:58:40 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:21.167 Running I/O for 1 seconds... 00:45:22.539 00:45:22.539 Latency(us) 00:45:22.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:22.539 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:45:22.539 Verification LBA range: start 0x0 length 0x2000 00:45:22.539 nvme0n1 : 1.01 6550.93 25.59 0.00 0.00 19393.04 4578.93 14423.64 00:45:22.539 =================================================================================================================== 00:45:22.539 Total : 6550.93 25.59 0.00 0.00 19393.04 4578.93 14423.64 00:45:22.539 0 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:45:22.539 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:45:22.540 nvmf_trace.0 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 100106 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100106 ']' 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100106 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:22.540 14:58:41 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100106 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:45:22.540 killing process with pid 100106 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100106' 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100106 00:45:22.540 Received shutdown signal, test time was about 1.000000 seconds 00:45:22.540 00:45:22.540 Latency(us) 00:45:22.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:22.540 =================================================================================================================== 00:45:22.540 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:22.540 14:58:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100106 00:45:22.540 14:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:45:22.540 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:22.540 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:45:22.540 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:22.540 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:45:22.540 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:22.540 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:22.540 rmmod nvme_tcp 00:45:22.540 rmmod nvme_fabrics 00:45:22.798 rmmod nvme_keyring 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 100062 ']' 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 100062 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100062 ']' 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100062 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100062 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100062' 00:45:22.798 killing process with pid 100062 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100062 00:45:22.798 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100062 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.uXTrvpqFUj /tmp/tmp.ybU7h9gXr5 /tmp/tmp.cG9ne331uX 00:45:23.056 00:45:23.056 real 1m19.944s 00:45:23.056 user 2m4.674s 00:45:23.056 sys 0m26.032s 00:45:23.056 ************************************ 00:45:23.056 END TEST nvmf_tls 00:45:23.056 ************************************ 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:45:23.056 14:58:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:45:23.056 14:58:42 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:45:23.056 14:58:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:45:23.056 14:58:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:45:23.056 14:58:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:23.056 ************************************ 00:45:23.056 START TEST nvmf_fips 00:45:23.056 ************************************ 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:45:23.056 * Looking for test storage... 
00:45:23.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:23.056 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:23.314 14:58:42 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:45:23.315 Error setting digest 00:45:23.315 00E206DEF67F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:45:23.315 00E206DEF67F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:23.315 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:23.316 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:23.316 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:23.316 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:23.316 14:58:42 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:23.316 14:58:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:45:23.573 Cannot find device "nvmf_tgt_br" 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:45:23.573 Cannot find device "nvmf_tgt_br2" 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:45:23.573 14:58:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:45:23.573 Cannot find device "nvmf_tgt_br" 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:45:23.573 Cannot find device "nvmf_tgt_br2" 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:23.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:23.573 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:45:23.573 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:45:23.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:23.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:45:23.831 00:45:23.831 --- 10.0.0.2 ping statistics --- 00:45:23.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:23.831 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:45:23.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:45:23.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:45:23.831 00:45:23.831 --- 10.0.0.3 ping statistics --- 00:45:23.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:23.831 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:23.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:23.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:45:23.831 00:45:23.831 --- 10.0.0.1 ping statistics --- 00:45:23.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:23.831 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=100393 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 100393 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 100393 ']' 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:23.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:23.831 14:58:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:45:23.831 [2024-07-22 14:58:43.424856] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
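The three pings close out nvmf_veth_init: a network namespace for the target, veth pairs for the initiator and target sides, and a bridge tying their peer ends together. Condensed from the trace above into a standalone sequence (the second target interface, 10.0.0.3, and the FORWARD rule are left out for brevity):

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: the *_if ends carry addresses, the *_br ends go onto the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 = initiator (root namespace), 10.0.0.2 = target (inside the namespace)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the two *_br ends and open TCP/4420 towards the initiator interface
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2   # root namespace -> target, as verified above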
00:45:23.831 [2024-07-22 14:58:43.424931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:24.124 [2024-07-22 14:58:43.562528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:24.124 [2024-07-22 14:58:43.613623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:24.124 [2024-07-22 14:58:43.613841] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:24.124 [2024-07-22 14:58:43.613886] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:24.124 [2024-07-22 14:58:43.613983] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:24.124 [2024-07-22 14:58:43.614033] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:24.124 [2024-07-22 14:58:43.614141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:45:24.693 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:45:24.952 [2024-07-22 14:58:44.459005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:24.953 [2024-07-22 14:58:44.474947] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:45:24.953 [2024-07-22 14:58:44.475107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:24.953 [2024-07-22 14:58:44.503158] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:45:24.953 malloc0 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=100445 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 100445 /var/tmp/bdevperf.sock 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 100445 ']' 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:24.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:24.953 14:58:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:45:25.213 [2024-07-22 14:58:44.611047] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:25.213 [2024-07-22 14:58:44.611122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100445 ] 00:45:25.213 [2024-07-22 14:58:44.750292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:25.213 [2024-07-22 14:58:44.798281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:26.152 14:58:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:26.152 14:58:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:45:26.152 14:58:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:45:26.152 [2024-07-22 14:58:45.587455] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:45:26.152 [2024-07-22 14:58:45.587547] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:45:26.152 TLSTESTn1 00:45:26.152 14:58:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:45:26.152 Running I/O for 10 seconds... 
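From the client's point of view the FIPS TLS exercise reduces to three steps that all appear in the trace: write the retained PSK to a 0600 key file, attach a TLS-secured controller through bdevperf's RPC socket, then let bdevperf.py drive the verify workload. Replayed in one place, using the paths from this run:

  key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"

  # TLS attach via PSK (the --psk path form is the deprecated one warned about above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

  # 10 seconds of queue-depth-128 verify I/O against TLSTESTn1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests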
00:45:36.159 00:45:36.159 Latency(us) 00:45:36.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:36.159 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:45:36.159 Verification LBA range: start 0x0 length 0x2000 00:45:36.159 TLSTESTn1 : 10.01 6411.31 25.04 0.00 0.00 19932.54 4121.04 20376.26 00:45:36.159 =================================================================================================================== 00:45:36.159 Total : 6411.31 25.04 0.00 0.00 19932.54 4121.04 20376.26 00:45:36.159 0 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:45:36.159 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:45:36.159 nvmf_trace.0 00:45:36.419 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 100445 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 100445 ']' 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 100445 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100445 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:45:36.420 killing process with pid 100445 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100445' 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 100445 00:45:36.420 Received shutdown signal, test time was about 10.000000 seconds 00:45:36.420 00:45:36.420 Latency(us) 00:45:36.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:36.420 =================================================================================================================== 00:45:36.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:36.420 [2024-07-22 14:58:55.896025] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:45:36.420 14:58:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 100445 00:45:36.679 14:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:45:36.679 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:45:36.679 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:45:36.679 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:36.680 rmmod nvme_tcp 00:45:36.680 rmmod nvme_fabrics 00:45:36.680 rmmod nvme_keyring 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 100393 ']' 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 100393 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 100393 ']' 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 100393 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100393 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:45:36.680 killing process with pid 100393 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100393' 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 100393 00:45:36.680 [2024-07-22 14:58:56.252165] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:45:36.680 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 100393 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:45:36.940 ************************************ 00:45:36.940 END TEST nvmf_fips 00:45:36.940 ************************************ 00:45:36.940 00:45:36.940 real 0m13.958s 00:45:36.940 user 0m18.845s 00:45:36.940 sys 0m5.452s 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:45:36.940 14:58:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 
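Before any of that TLS traffic, fips.sh gated the run on the host really being in FIPS mode; the checks it performed reduce to the sketch below (same openssl invocations as in the trace, with the error handling simplified and the generated spdk_fips.conf assumed to be present in the working directory):

  # OpenSSL must be >= 3.0 and the FIPS provider module must be installed
  openssl version          # 3.0.9 in this run, compared against the 3.0.0 floor
  test -f /usr/lib64/ossl-modules/fips.so

  # With OPENSSL_CONF pointing at the generated config, the base and fips
  # providers must both show up ...
  OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name

  # ... and a non-approved digest such as MD5 must be refused, matching the
  # "Error setting digest" failure seen earlier in the trace
  if echo test | OPENSSL_CONF=spdk_fips.conf openssl md5 >/dev/null 2>&1; then
      echo 'MD5 unexpectedly succeeded - FIPS mode is not active' >&2
      exit 1
  fi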
00:45:36.940 14:58:56 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:45:36.940 14:58:56 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:45:36.940 14:58:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:45:36.940 14:58:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:45:36.940 14:58:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:37.200 ************************************ 00:45:37.200 START TEST nvmf_fuzz 00:45:37.200 ************************************ 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:45:37.200 * Looking for test storage... 00:45:37.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:45:37.200 14:58:56 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:37.201 14:58:56 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:45:37.201 Cannot find device "nvmf_tgt_br" 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:45:37.201 Cannot find device "nvmf_tgt_br2" 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:45:37.201 Cannot find device "nvmf_tgt_br" 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:45:37.201 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:45:37.462 Cannot find device "nvmf_tgt_br2" 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:37.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:37.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:45:37.462 14:58:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:37.462 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:45:37.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:45:37.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:45:37.722 00:45:37.722 --- 10.0.0.2 ping statistics --- 00:45:37.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:37.722 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:45:37.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:37.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:45:37.722 00:45:37.722 --- 10.0.0.3 ping statistics --- 00:45:37.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:37.722 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:37.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:45:37.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:45:37.722 00:45:37.722 --- 10.0.0.1 ping statistics --- 00:45:37.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:37.722 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=100781 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 100781 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 100781 ']' 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:37.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
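The nvmf_veth_init block traced just above builds the virtual topology every tcp test here relies on: the initiator keeps 10.0.0.1 in the root namespace, the two target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and all veth peers hang off one bridge. A condensed sketch of those commands, taken from the trace with only the ordering compacted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # root ns -> target ns
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target ns -> root ns

The "Cannot find device" and "Cannot open network namespace" errors earlier in the trace are expected: the cleanup commands run first and simply find nothing to remove on a fresh node.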
00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:37.722 14:58:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 Malloc0 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:45:38.661 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:45:38.920 Shutting down the fuzz application 00:45:38.920 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:45:39.180 Shutting down the fuzz application 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:39.180 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:39.180 rmmod nvme_tcp 00:45:39.180 rmmod nvme_fabrics 00:45:39.439 rmmod nvme_keyring 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 100781 ']' 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 100781 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 100781 ']' 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 100781 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100781 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:45:39.439 killing process with pid 100781 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100781' 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 100781 00:45:39.439 14:58:58 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 100781 00:45:39.698 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:39.698 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:39.698 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:39.698 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:39.698 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:39.698 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:39.699 14:58:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:39.699 14:58:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:39.699 14:58:59 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:45:39.699 14:58:59 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:45:39.699 00:45:39.699 real 0m2.581s 00:45:39.699 user 0m2.456s 00:45:39.699 sys 0m0.693s 00:45:39.699 14:58:59 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:45:39.699 14:58:59 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:45:39.699 ************************************ 00:45:39.699 END TEST nvmf_fuzz 00:45:39.699 ************************************ 00:45:39.699 14:58:59 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:45:39.699 14:58:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:45:39.699 14:58:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:45:39.699 14:58:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:45:39.699 ************************************ 00:45:39.699 START TEST nvmf_multiconnection 00:45:39.699 ************************************ 00:45:39.699 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:45:39.699 * Looking for test storage... 00:45:39.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:39.958 14:58:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:45:39.959 Cannot find device "nvmf_tgt_br" 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:45:39.959 Cannot find device "nvmf_tgt_br2" 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:45:39.959 Cannot find device "nvmf_tgt_br" 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:45:39.959 Cannot find device "nvmf_tgt_br2" 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:45:39.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:45:39.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:45:39.959 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:45:40.220 14:58:59 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:45:40.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:40.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:45:40.220 00:45:40.220 --- 10.0.0.2 ping statistics --- 00:45:40.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:40.220 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:45:40.220 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:45:40.220 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:45:40.220 00:45:40.220 --- 10.0.0.3 ping statistics --- 00:45:40.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:40.220 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:45:40.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:40.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:45:40.220 00:45:40.220 --- 10.0.0.1 ping statistics --- 00:45:40.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:40.220 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=101002 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 101002 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 101002 ']' 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:45:40.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:45:40.220 14:58:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:40.480 [2024-07-22 14:58:59.868444] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:45:40.480 [2024-07-22 14:58:59.868507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:40.480 [2024-07-22 14:59:00.007945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:45:40.480 [2024-07-22 14:59:00.058544] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
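Once the target (pid 101002) is listening, the multiconnection test provisions one TCP transport and NVMF_SUBSYS=11 malloc-backed subsystems, then connects to each from the initiator namespace. A sketch of that sequence, assembled from the rpc_cmd and nvme connect calls that follow in this log (rpc_cmd is assumed to be the harness wrapper around scripts/rpc.py; flags are copied verbatim from the trace):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"            # 64 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
# initiator side, per subsystem (hostnqn/hostid were generated by nvme gen-hostnqn above)
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420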
00:45:40.480 [2024-07-22 14:59:00.058595] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:40.480 [2024-07-22 14:59:00.058601] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:40.480 [2024-07-22 14:59:00.058606] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:45:40.480 [2024-07-22 14:59:00.058609] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:40.480 [2024-07-22 14:59:00.058807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:45:40.480 [2024-07-22 14:59:00.059012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:45:40.480 [2024-07-22 14:59:00.059052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:45:40.480 [2024-07-22 14:59:00.059051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.443 [2024-07-22 14:59:00.765434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.443 Malloc1 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.443 [2024-07-22 14:59:00.831239] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:45:41.443 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 Malloc2 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 Malloc3 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 Malloc4 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:45:41.444 14:59:00 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 Malloc5 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 Malloc6 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.444 14:59:01 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 Malloc7 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 Malloc8 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 Malloc9 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 Malloc10 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 Malloc11 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.704 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:45:41.705 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.705 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:45:41.705 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:41.705 14:59:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:45:41.964 14:59:01 nvmf_tcp.nvmf_multiconnection -- 
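The xtrace up to this point is the provisioning loop at target/multiconnection.sh:21-25: for each of the 11 subsystems it creates a 64 MiB malloc bdev with 512-byte blocks, creates the subsystem with serial number SPDK$i and any-host access, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. A minimal sketch of that loop, assuming rpc_cmd is the autotest wrapper that forwards to scripts/rpc.py against the running nvmf_tgt:

    for i in $(seq 1 $NVMF_SUBSYS); do                                                # NVMF_SUBSYS=11 in this run
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"                               # 64 MiB bdev, 512 B block size
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"    # -a: allow any host, -s: serial number
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"        # expose the bdev as a namespace
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done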
target/multiconnection.sh@30 -- # waitforserial SPDK1 00:45:41.964 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:41.964 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:41.964 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:41.964 14:59:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:43.868 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:45:44.127 14:59:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:45:44.127 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:44.127 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:44.128 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:44.128 14:59:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:46.664 14:59:05 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:46.664 14:59:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:48.568 14:59:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:45:48.568 14:59:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:45:48.568 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:48.568 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:48.568 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:48.568 14:59:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:50.474 14:59:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:45:50.733 14:59:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:45:50.733 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:50.733 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:50.733 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:50.733 14:59:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:53.268 14:59:12 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:53.268 14:59:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:55.191 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:55.192 14:59:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:57.096 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:57.096 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:57.096 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:45:57.096 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:57.096 
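Each nvme connect above is paired with the waitforserial helper from common/autotest_common.sh, which polls lsblk until a block device carrying the expected SPDK serial shows up (up to 16 attempts, 2 s apart, per the (( i++ <= 15 )) and sleep 2 lines in the trace). The host-side half of the test therefore amounts to roughly the following; HOSTID stands for the host UUID used throughout this run, and the helper body is an approximation of the traced logic rather than a verbatim copy:

    HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5     # host UUID used by this run
    waitforserial() {                               # approximate reconstruction of the traced helper
        local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }
    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        waitforserial "SPDK$i"
    done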
14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:57.096 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:57.096 14:59:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:57.096 14:59:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:45:57.355 14:59:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:45:57.355 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:57.355 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:57.355 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:57.355 14:59:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:45:59.258 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:45:59.258 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:45:59.258 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:45:59.517 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:45:59.517 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:45:59.517 14:59:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:45:59.517 14:59:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:45:59.517 14:59:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:45:59.517 14:59:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:45:59.517 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:45:59.517 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:45:59.517 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:45:59.517 14:59:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:46:02.050 14:59:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:46:03.952 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:46:03.952 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:46:03.952 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:46:03.952 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:46:03.952 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:46:03.953 14:59:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:46:06.483 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:46:06.483 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:46:06.483 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:46:06.483 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:46:06.483 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:46:06.483 14:59:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:46:06.483 14:59:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:46:06.483 [global] 00:46:06.483 thread=1 00:46:06.483 invalidate=1 00:46:06.483 rw=read 00:46:06.483 time_based=1 00:46:06.483 runtime=10 00:46:06.483 ioengine=libaio 00:46:06.483 direct=1 00:46:06.483 bs=262144 00:46:06.483 iodepth=64 
00:46:06.483 norandommap=1 00:46:06.483 numjobs=1 00:46:06.483 00:46:06.483 [job0] 00:46:06.484 filename=/dev/nvme0n1 00:46:06.484 [job1] 00:46:06.484 filename=/dev/nvme10n1 00:46:06.484 [job2] 00:46:06.484 filename=/dev/nvme1n1 00:46:06.484 [job3] 00:46:06.484 filename=/dev/nvme2n1 00:46:06.484 [job4] 00:46:06.484 filename=/dev/nvme3n1 00:46:06.484 [job5] 00:46:06.484 filename=/dev/nvme4n1 00:46:06.484 [job6] 00:46:06.484 filename=/dev/nvme5n1 00:46:06.484 [job7] 00:46:06.484 filename=/dev/nvme6n1 00:46:06.484 [job8] 00:46:06.484 filename=/dev/nvme7n1 00:46:06.484 [job9] 00:46:06.484 filename=/dev/nvme8n1 00:46:06.484 [job10] 00:46:06.484 filename=/dev/nvme9n1 00:46:06.484 Could not set queue depth (nvme0n1) 00:46:06.484 Could not set queue depth (nvme10n1) 00:46:06.484 Could not set queue depth (nvme1n1) 00:46:06.484 Could not set queue depth (nvme2n1) 00:46:06.484 Could not set queue depth (nvme3n1) 00:46:06.484 Could not set queue depth (nvme4n1) 00:46:06.484 Could not set queue depth (nvme5n1) 00:46:06.484 Could not set queue depth (nvme6n1) 00:46:06.484 Could not set queue depth (nvme7n1) 00:46:06.484 Could not set queue depth (nvme8n1) 00:46:06.484 Could not set queue depth (nvme9n1) 00:46:06.484 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:06.484 fio-3.35 00:46:06.484 Starting 11 threads 00:46:18.690 00:46:18.690 job0: (groupid=0, jobs=1): err= 0: pid=101474: Mon Jul 22 14:59:36 2024 00:46:18.690 read: IOPS=758, BW=190MiB/s (199MB/s)(1909MiB/10065msec) 00:46:18.690 slat (usec): min=14, max=67740, avg=1234.39, stdev=4293.91 00:46:18.690 clat (msec): min=13, max=179, avg=82.96, stdev=23.88 00:46:18.690 lat (msec): min=13, max=193, avg=84.19, stdev=24.36 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 28], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 61], 00:46:18.690 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 88], 00:46:18.690 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 124], 00:46:18.690 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 161], 99.95th=[ 178], 00:46:18.690 | 99.99th=[ 180] 00:46:18.690 bw ( KiB/s): min=139264, max=347648, per=8.48%, avg=193724.35, stdev=47694.37, samples=20 00:46:18.690 iops : min= 544, max= 1358, avg=756.55, stdev=186.36, samples=20 
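The read pass is driven by scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10, and the [global]/[jobN] listing it prints above corresponds to an ordinary fio job file with one job per connected namespace. A sketch of an equivalent standalone run (the nvmf-read.fio filename is made up for the example; the device list is exactly the one printed above):

    cat > nvmf-read.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme10n1
    [job2]
    filename=/dev/nvme1n1
    EOF
    # ...continuing through [job10] filename=/dev/nvme9n1, as listed above
    fio nvmf-read.fio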
00:46:18.690 lat (msec) : 20=0.28%, 50=6.93%, 100=70.18%, 250=22.62% 00:46:18.690 cpu : usr=0.26%, sys=3.48%, ctx=1827, majf=0, minf=4097 00:46:18.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:46:18.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.690 issued rwts: total=7636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.690 job1: (groupid=0, jobs=1): err= 0: pid=101475: Mon Jul 22 14:59:36 2024 00:46:18.690 read: IOPS=722, BW=181MiB/s (189MB/s)(1816MiB/10058msec) 00:46:18.690 slat (usec): min=13, max=56516, avg=1264.11, stdev=4404.40 00:46:18.690 clat (msec): min=12, max=165, avg=87.15, stdev=21.74 00:46:18.690 lat (msec): min=13, max=179, avg=88.42, stdev=22.33 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 41], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 71], 00:46:18.690 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 90], 00:46:18.690 | 70.00th=[ 99], 80.00th=[ 107], 90.00th=[ 115], 95.00th=[ 124], 00:46:18.690 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:46:18.690 | 99.99th=[ 167] 00:46:18.690 bw ( KiB/s): min=134656, max=250890, per=8.07%, avg=184201.30, stdev=32944.44, samples=20 00:46:18.690 iops : min= 526, max= 980, avg=719.45, stdev=128.65, samples=20 00:46:18.690 lat (msec) : 20=0.22%, 50=3.11%, 100=68.84%, 250=27.83% 00:46:18.690 cpu : usr=0.38%, sys=3.31%, ctx=1578, majf=0, minf=4097 00:46:18.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:46:18.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.690 issued rwts: total=7265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.690 job2: (groupid=0, jobs=1): err= 0: pid=101476: Mon Jul 22 14:59:36 2024 00:46:18.690 read: IOPS=852, BW=213MiB/s (224MB/s)(2150MiB/10086msec) 00:46:18.690 slat (usec): min=12, max=52068, avg=1021.92, stdev=3603.73 00:46:18.690 clat (msec): min=9, max=175, avg=73.86, stdev=25.53 00:46:18.690 lat (msec): min=9, max=175, avg=74.88, stdev=25.94 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 45], 20.00th=[ 54], 00:46:18.690 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 74], 60.00th=[ 81], 00:46:18.690 | 70.00th=[ 86], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 122], 00:46:18.690 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 176], 99.95th=[ 176], 00:46:18.690 | 99.99th=[ 176] 00:46:18.690 bw ( KiB/s): min=133609, max=370688, per=9.56%, avg=218404.20, stdev=55605.06, samples=20 00:46:18.690 iops : min= 521, max= 1448, avg=853.00, stdev=217.25, samples=20 00:46:18.690 lat (msec) : 10=0.13%, 20=0.62%, 50=13.22%, 100=72.97%, 250=13.07% 00:46:18.690 cpu : usr=0.30%, sys=4.43%, ctx=1936, majf=0, minf=4097 00:46:18.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:46:18.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.690 issued rwts: total=8600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.690 job3: (groupid=0, jobs=1): err= 0: pid=101477: Mon Jul 22 14:59:36 2024 00:46:18.690 
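As a quick sanity check, per-job bandwidth ties out as IOPS times the 256 KiB block size: for job0 above, 758 IOPS x 256 KiB is about 194,000 KiB/s, i.e. roughly 190 MiB/s (about 199 MB/s), matching the reported figure; the MiB/s and MB/s values are the same rate in binary and decimal units.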
read: IOPS=694, BW=174MiB/s (182MB/s)(1750MiB/10075msec) 00:46:18.690 slat (usec): min=12, max=60770, avg=1361.70, stdev=4662.07 00:46:18.690 clat (msec): min=22, max=186, avg=90.57, stdev=19.96 00:46:18.690 lat (msec): min=22, max=196, avg=91.93, stdev=20.57 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 45], 5.00th=[ 59], 10.00th=[ 69], 20.00th=[ 77], 00:46:18.690 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 93], 00:46:18.690 | 70.00th=[ 101], 80.00th=[ 107], 90.00th=[ 117], 95.00th=[ 125], 00:46:18.690 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 169], 00:46:18.690 | 99.99th=[ 186] 00:46:18.690 bw ( KiB/s): min=129024, max=234538, per=7.77%, avg=177398.60, stdev=25138.56, samples=20 00:46:18.690 iops : min= 504, max= 916, avg=692.85, stdev=98.12, samples=20 00:46:18.690 lat (msec) : 50=2.41%, 100=68.12%, 250=29.46% 00:46:18.690 cpu : usr=0.25%, sys=3.84%, ctx=1720, majf=0, minf=4097 00:46:18.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:46:18.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.690 issued rwts: total=6999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.690 job4: (groupid=0, jobs=1): err= 0: pid=101478: Mon Jul 22 14:59:36 2024 00:46:18.690 read: IOPS=660, BW=165MiB/s (173MB/s)(1663MiB/10071msec) 00:46:18.690 slat (usec): min=13, max=59589, avg=1416.57, stdev=4778.54 00:46:18.690 clat (msec): min=15, max=196, avg=95.28, stdev=27.77 00:46:18.690 lat (msec): min=15, max=197, avg=96.70, stdev=28.34 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 68], 20.00th=[ 78], 00:46:18.690 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 99], 00:46:18.690 | 70.00th=[ 110], 80.00th=[ 118], 90.00th=[ 129], 95.00th=[ 146], 00:46:18.690 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 190], 00:46:18.690 | 99.99th=[ 197] 00:46:18.690 bw ( KiB/s): min=122880, max=304640, per=7.38%, avg=168632.55, stdev=40008.74, samples=20 00:46:18.690 iops : min= 480, max= 1190, avg=658.60, stdev=156.34, samples=20 00:46:18.690 lat (msec) : 20=0.12%, 50=5.44%, 100=56.40%, 250=38.03% 00:46:18.690 cpu : usr=0.23%, sys=3.35%, ctx=1543, majf=0, minf=4097 00:46:18.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:46:18.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.690 issued rwts: total=6652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.690 job5: (groupid=0, jobs=1): err= 0: pid=101479: Mon Jul 22 14:59:36 2024 00:46:18.690 read: IOPS=748, BW=187MiB/s (196MB/s)(1882MiB/10055msec) 00:46:18.690 slat (usec): min=14, max=67019, avg=1237.52, stdev=4151.63 00:46:18.690 clat (msec): min=10, max=181, avg=84.10, stdev=21.51 00:46:18.690 lat (msec): min=11, max=181, avg=85.34, stdev=21.95 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 23], 5.00th=[ 46], 10.00th=[ 58], 20.00th=[ 72], 00:46:18.690 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 88], 00:46:18.690 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 118], 00:46:18.690 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 165], 00:46:18.690 | 99.99th=[ 182] 00:46:18.690 bw ( KiB/s): 
min=149716, max=246272, per=8.36%, avg=190994.85, stdev=22465.55, samples=20 00:46:18.690 iops : min= 584, max= 962, avg=746.00, stdev=87.80, samples=20 00:46:18.690 lat (msec) : 20=0.48%, 50=5.41%, 100=75.41%, 250=18.71% 00:46:18.690 cpu : usr=0.32%, sys=3.74%, ctx=1817, majf=0, minf=4097 00:46:18.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:46:18.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.690 issued rwts: total=7526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.690 job6: (groupid=0, jobs=1): err= 0: pid=101480: Mon Jul 22 14:59:36 2024 00:46:18.690 read: IOPS=894, BW=224MiB/s (234MB/s)(2252MiB/10071msec) 00:46:18.690 slat (usec): min=13, max=83577, avg=980.07, stdev=3667.80 00:46:18.690 clat (msec): min=10, max=181, avg=70.36, stdev=26.25 00:46:18.690 lat (msec): min=10, max=232, avg=71.34, stdev=26.66 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 20], 5.00th=[ 29], 10.00th=[ 41], 20.00th=[ 51], 00:46:18.690 | 30.00th=[ 56], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 75], 00:46:18.690 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 116], 00:46:18.690 | 99.00th=[ 140], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 182], 00:46:18.690 | 99.99th=[ 182] 00:46:18.690 bw ( KiB/s): min=152576, max=459880, per=10.02%, avg=228838.75, stdev=73720.52, samples=20 00:46:18.690 iops : min= 596, max= 1796, avg=893.70, stdev=287.96, samples=20 00:46:18.690 lat (msec) : 20=1.15%, 50=17.26%, 100=67.03%, 250=14.55% 00:46:18.690 cpu : usr=0.27%, sys=4.02%, ctx=2049, majf=0, minf=4097 00:46:18.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:46:18.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.690 issued rwts: total=9008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.690 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.690 job7: (groupid=0, jobs=1): err= 0: pid=101481: Mon Jul 22 14:59:36 2024 00:46:18.690 read: IOPS=1289, BW=322MiB/s (338MB/s)(3231MiB/10021msec) 00:46:18.690 slat (usec): min=12, max=67602, avg=720.28, stdev=2812.10 00:46:18.690 clat (usec): min=1926, max=133418, avg=48782.66, stdev=22657.46 00:46:18.690 lat (usec): min=1981, max=192036, avg=49502.94, stdev=23042.61 00:46:18.690 clat percentiles (msec): 00:46:18.690 | 1.00th=[ 12], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 27], 00:46:18.691 | 30.00th=[ 31], 40.00th=[ 38], 50.00th=[ 50], 60.00th=[ 55], 00:46:18.691 | 70.00th=[ 59], 80.00th=[ 67], 90.00th=[ 84], 95.00th=[ 91], 00:46:18.691 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 124], 99.95th=[ 128], 00:46:18.691 | 99.99th=[ 134] 00:46:18.691 bw ( KiB/s): min=178486, max=593408, per=14.40%, avg=328832.75, stdev=127375.23, samples=20 00:46:18.691 iops : min= 697, max= 2318, avg=1284.40, stdev=497.55, samples=20 00:46:18.691 lat (msec) : 2=0.04%, 4=0.12%, 10=0.61%, 20=4.18%, 50=46.17% 00:46:18.691 lat (msec) : 100=47.14%, 250=1.75% 00:46:18.691 cpu : usr=0.48%, sys=5.94%, ctx=3263, majf=0, minf=4097 00:46:18.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:46:18.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.691 issued 
rwts: total=12922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.691 job8: (groupid=0, jobs=1): err= 0: pid=101482: Mon Jul 22 14:59:36 2024 00:46:18.691 read: IOPS=862, BW=216MiB/s (226MB/s)(2175MiB/10084msec) 00:46:18.691 slat (usec): min=14, max=58886, avg=1058.77, stdev=3837.52 00:46:18.691 clat (msec): min=13, max=194, avg=72.96, stdev=31.69 00:46:18.691 lat (msec): min=13, max=195, avg=74.02, stdev=32.26 00:46:18.691 clat percentiles (msec): 00:46:18.691 | 1.00th=[ 20], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 37], 00:46:18.691 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 77], 60.00th=[ 84], 00:46:18.691 | 70.00th=[ 90], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 122], 00:46:18.691 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 182], 00:46:18.691 | 99.99th=[ 194] 00:46:18.691 bw ( KiB/s): min=122368, max=570880, per=9.68%, avg=221083.70, stdev=108734.76, samples=20 00:46:18.691 iops : min= 478, max= 2230, avg=863.45, stdev=424.80, samples=20 00:46:18.691 lat (msec) : 20=1.39%, 50=24.75%, 100=52.87%, 250=20.99% 00:46:18.691 cpu : usr=0.28%, sys=4.17%, ctx=2069, majf=0, minf=4097 00:46:18.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:46:18.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.691 issued rwts: total=8700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.691 job9: (groupid=0, jobs=1): err= 0: pid=101483: Mon Jul 22 14:59:36 2024 00:46:18.691 read: IOPS=728, BW=182MiB/s (191MB/s)(1837MiB/10085msec) 00:46:18.691 slat (usec): min=19, max=56228, avg=1278.81, stdev=4275.17 00:46:18.691 clat (msec): min=18, max=195, avg=86.37, stdev=22.30 00:46:18.691 lat (msec): min=18, max=195, avg=87.65, stdev=22.82 00:46:18.691 clat percentiles (msec): 00:46:18.691 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 69], 00:46:18.691 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 86], 60.00th=[ 91], 00:46:18.691 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 114], 95.00th=[ 121], 00:46:18.691 | 99.00th=[ 133], 99.50th=[ 157], 99.90th=[ 197], 99.95th=[ 197], 00:46:18.691 | 99.99th=[ 197] 00:46:18.691 bw ( KiB/s): min=134144, max=245760, per=8.16%, avg=186394.75, stdev=32131.84, samples=20 00:46:18.691 iops : min= 524, max= 960, avg=728.00, stdev=125.60, samples=20 00:46:18.691 lat (msec) : 20=0.14%, 50=4.36%, 100=68.09%, 250=27.42% 00:46:18.691 cpu : usr=0.37%, sys=3.78%, ctx=1665, majf=0, minf=4097 00:46:18.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:46:18.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.691 issued rwts: total=7346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.691 job10: (groupid=0, jobs=1): err= 0: pid=101484: Mon Jul 22 14:59:36 2024 00:46:18.691 read: IOPS=725, BW=181MiB/s (190MB/s)(1829MiB/10080msec) 00:46:18.691 slat (usec): min=13, max=44331, avg=1257.31, stdev=4148.71 00:46:18.691 clat (msec): min=19, max=180, avg=86.75, stdev=22.02 00:46:18.691 lat (msec): min=20, max=180, avg=88.01, stdev=22.56 00:46:18.691 clat percentiles (msec): 00:46:18.691 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 71], 00:46:18.691 | 30.00th=[ 79], 40.00th=[ 83], 
50.00th=[ 87], 60.00th=[ 92], 00:46:18.691 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 123], 00:46:18.691 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 182], 00:46:18.691 | 99.99th=[ 182] 00:46:18.691 bw ( KiB/s): min=137728, max=273884, per=8.12%, avg=185515.25, stdev=35223.43, samples=20 00:46:18.691 iops : min= 538, max= 1069, avg=724.45, stdev=137.53, samples=20 00:46:18.691 lat (msec) : 20=0.01%, 50=6.31%, 100=67.00%, 250=26.67% 00:46:18.691 cpu : usr=0.34%, sys=3.55%, ctx=1835, majf=0, minf=4097 00:46:18.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:46:18.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:18.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:18.691 issued rwts: total=7316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:18.691 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:18.691 00:46:18.691 Run status group 0 (all jobs): 00:46:18.691 READ: bw=2230MiB/s (2338MB/s), 165MiB/s-322MiB/s (173MB/s-338MB/s), io=22.0GiB (23.6GB), run=10021-10086msec 00:46:18.691 00:46:18.691 Disk stats (read/write): 00:46:18.691 nvme0n1: ios=14895/0, merge=0/0, ticks=1213209/0, in_queue=1213209, util=97.26% 00:46:18.691 nvme10n1: ios=14171/0, merge=0/0, ticks=1214386/0, in_queue=1214386, util=97.83% 00:46:18.691 nvme1n1: ios=16902/0, merge=0/0, ticks=1212939/0, in_queue=1212939, util=98.07% 00:46:18.691 nvme2n1: ios=13631/0, merge=0/0, ticks=1212040/0, in_queue=1212040, util=98.17% 00:46:18.691 nvme3n1: ios=12981/0, merge=0/0, ticks=1216983/0, in_queue=1216983, util=97.56% 00:46:18.691 nvme4n1: ios=14662/0, merge=0/0, ticks=1212749/0, in_queue=1212749, util=97.98% 00:46:18.691 nvme5n1: ios=17675/0, merge=0/0, ticks=1212987/0, in_queue=1212987, util=97.87% 00:46:18.691 nvme6n1: ios=24922/0, merge=0/0, ticks=1201097/0, in_queue=1201097, util=97.77% 00:46:18.691 nvme7n1: ios=17082/0, merge=0/0, ticks=1210137/0, in_queue=1210137, util=98.10% 00:46:18.691 nvme8n1: ios=14419/0, merge=0/0, ticks=1214633/0, in_queue=1214633, util=98.30% 00:46:18.691 nvme9n1: ios=14316/0, merge=0/0, ticks=1213433/0, in_queue=1213433, util=98.28% 00:46:18.691 14:59:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:46:18.691 [global] 00:46:18.691 thread=1 00:46:18.691 invalidate=1 00:46:18.691 rw=randwrite 00:46:18.691 time_based=1 00:46:18.691 runtime=10 00:46:18.691 ioengine=libaio 00:46:18.691 direct=1 00:46:18.691 bs=262144 00:46:18.691 iodepth=64 00:46:18.691 norandommap=1 00:46:18.691 numjobs=1 00:46:18.691 00:46:18.691 [job0] 00:46:18.691 filename=/dev/nvme0n1 00:46:18.691 [job1] 00:46:18.691 filename=/dev/nvme10n1 00:46:18.691 [job2] 00:46:18.691 filename=/dev/nvme1n1 00:46:18.691 [job3] 00:46:18.691 filename=/dev/nvme2n1 00:46:18.691 [job4] 00:46:18.691 filename=/dev/nvme3n1 00:46:18.691 [job5] 00:46:18.691 filename=/dev/nvme4n1 00:46:18.691 [job6] 00:46:18.691 filename=/dev/nvme5n1 00:46:18.691 [job7] 00:46:18.691 filename=/dev/nvme6n1 00:46:18.691 [job8] 00:46:18.691 filename=/dev/nvme7n1 00:46:18.691 [job9] 00:46:18.691 filename=/dev/nvme8n1 00:46:18.691 [job10] 00:46:18.691 filename=/dev/nvme9n1 00:46:18.691 Could not set queue depth (nvme0n1) 00:46:18.691 Could not set queue depth (nvme10n1) 00:46:18.691 Could not set queue depth (nvme1n1) 00:46:18.691 Could not set queue depth (nvme2n1) 00:46:18.691 Could not set queue depth (nvme3n1) 00:46:18.691 Could 
not set queue depth (nvme4n1) 00:46:18.691 Could not set queue depth (nvme5n1) 00:46:18.691 Could not set queue depth (nvme6n1) 00:46:18.691 Could not set queue depth (nvme7n1) 00:46:18.691 Could not set queue depth (nvme8n1) 00:46:18.691 Could not set queue depth (nvme9n1) 00:46:18.691 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:46:18.691 fio-3.35 00:46:18.691 Starting 11 threads 00:46:28.667 00:46:28.667 job0: (groupid=0, jobs=1): err= 0: pid=101693: Mon Jul 22 14:59:47 2024 00:46:28.667 write: IOPS=216, BW=54.1MiB/s (56.7MB/s)(554MiB/10240msec); 0 zone resets 00:46:28.667 slat (usec): min=18, max=70638, avg=4516.57, stdev=8817.26 00:46:28.667 clat (msec): min=20, max=569, avg=291.32, stdev=77.81 00:46:28.667 lat (msec): min=20, max=569, avg=295.83, stdev=78.51 00:46:28.667 clat percentiles (msec): 00:46:28.667 | 1.00th=[ 50], 5.00th=[ 176], 10.00th=[ 194], 20.00th=[ 224], 00:46:28.667 | 30.00th=[ 234], 40.00th=[ 300], 50.00th=[ 326], 60.00th=[ 334], 00:46:28.667 | 70.00th=[ 342], 80.00th=[ 351], 90.00th=[ 372], 95.00th=[ 376], 00:46:28.667 | 99.00th=[ 443], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 567], 00:46:28.667 | 99.99th=[ 567] 00:46:28.667 bw ( KiB/s): min=43008, max=88240, per=4.50%, avg=55048.30, stdev=13429.38, samples=20 00:46:28.667 iops : min= 168, max= 344, avg=214.90, stdev=52.43, samples=20 00:46:28.667 lat (msec) : 50=1.04%, 100=1.08%, 250=35.41%, 500=61.83%, 750=0.63% 00:46:28.667 cpu : usr=0.60%, sys=0.94%, ctx=2562, majf=0, minf=1 00:46:28.667 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:46:28.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.668 issued rwts: total=0,2214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.668 job1: (groupid=0, jobs=1): err= 0: pid=101694: Mon Jul 22 14:59:47 2024 00:46:28.668 write: IOPS=209, BW=52.3MiB/s (54.8MB/s)(535MiB/10237msec); 0 zone resets 00:46:28.668 slat (usec): min=24, max=81097, avg=4670.93, stdev=9251.23 00:46:28.668 clat (msec): min=85, max=569, avg=301.04, stdev=72.22 
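The write pass reuses the same wrapper with only the workload type changed; judging by the two invocations and the job listings they print, the wrapper flags map onto the job file as -i -> bs (bytes), -d -> iodepth, -t -> rw and -r -> runtime (seconds), so the second run is essentially:

    # write pass: same layout as the read job file, with rw=randwrite
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10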
00:46:28.668 lat (msec): min=85, max=569, avg=305.71, stdev=72.76 00:46:28.668 clat percentiles (msec): 00:46:28.668 | 1.00th=[ 142], 5.00th=[ 190], 10.00th=[ 205], 20.00th=[ 228], 00:46:28.668 | 30.00th=[ 239], 40.00th=[ 305], 50.00th=[ 334], 60.00th=[ 342], 00:46:28.668 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 368], 95.00th=[ 376], 00:46:28.668 | 99.00th=[ 464], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 567], 00:46:28.668 | 99.99th=[ 567] 00:46:28.668 bw ( KiB/s): min=43008, max=78336, per=4.34%, avg=53145.20, stdev=11814.49, samples=20 00:46:28.668 iops : min= 168, max= 306, avg=207.45, stdev=46.19, samples=20 00:46:28.668 lat (msec) : 100=0.28%, 250=36.10%, 500=62.96%, 750=0.65% 00:46:28.668 cpu : usr=0.58%, sys=0.71%, ctx=2815, majf=0, minf=1 00:46:28.668 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:46:28.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.668 issued rwts: total=0,2141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.668 job2: (groupid=0, jobs=1): err= 0: pid=101701: Mon Jul 22 14:59:47 2024 00:46:28.668 write: IOPS=257, BW=64.3MiB/s (67.4MB/s)(659MiB/10241msec); 0 zone resets 00:46:28.668 slat (usec): min=20, max=76051, avg=3786.76, stdev=8179.43 00:46:28.668 clat (msec): min=5, max=591, avg=244.90, stdev=122.40 00:46:28.668 lat (msec): min=5, max=591, avg=248.68, stdev=124.03 00:46:28.668 clat percentiles (msec): 00:46:28.668 | 1.00th=[ 10], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 68], 00:46:28.668 | 30.00th=[ 194], 40.00th=[ 232], 50.00th=[ 292], 60.00th=[ 326], 00:46:28.668 | 70.00th=[ 342], 80.00th=[ 351], 90.00th=[ 372], 95.00th=[ 376], 00:46:28.668 | 99.00th=[ 443], 99.50th=[ 527], 99.90th=[ 567], 99.95th=[ 592], 00:46:28.668 | 99.99th=[ 592] 00:46:28.668 bw ( KiB/s): min=42922, max=195072, per=5.37%, avg=65758.75, stdev=45025.59, samples=20 00:46:28.668 iops : min= 167, max= 762, avg=256.75, stdev=175.85, samples=20 00:46:28.668 lat (msec) : 10=1.25%, 20=1.06%, 50=1.06%, 100=20.16%, 250=24.37% 00:46:28.668 lat (msec) : 500=51.40%, 750=0.68% 00:46:28.668 cpu : usr=0.74%, sys=1.06%, ctx=2188, majf=0, minf=1 00:46:28.668 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:46:28.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.668 issued rwts: total=0,2634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.668 job3: (groupid=0, jobs=1): err= 0: pid=101707: Mon Jul 22 14:59:47 2024 00:46:28.668 write: IOPS=245, BW=61.4MiB/s (64.3MB/s)(628MiB/10230msec); 0 zone resets 00:46:28.668 slat (usec): min=14, max=48494, avg=3938.47, stdev=7527.08 00:46:28.668 clat (msec): min=6, max=561, avg=256.68, stdev=94.16 00:46:28.668 lat (msec): min=6, max=561, avg=260.62, stdev=95.42 00:46:28.668 clat percentiles (msec): 00:46:28.668 | 1.00th=[ 27], 5.00th=[ 72], 10.00th=[ 146], 20.00th=[ 178], 00:46:28.668 | 30.00th=[ 190], 40.00th=[ 201], 50.00th=[ 313], 60.00th=[ 321], 00:46:28.668 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 342], 95.00th=[ 347], 00:46:28.668 | 99.00th=[ 435], 99.50th=[ 498], 99.90th=[ 542], 99.95th=[ 558], 00:46:28.668 | 99.99th=[ 558] 00:46:28.668 bw ( KiB/s): min=47104, max=151552, per=5.12%, avg=62663.45, stdev=26311.37, samples=20 
00:46:28.668 iops : min= 184, max= 592, avg=244.75, stdev=102.79, samples=20 00:46:28.668 lat (msec) : 10=0.16%, 20=0.32%, 50=2.23%, 100=4.22%, 250=36.12% 00:46:28.668 lat (msec) : 500=56.55%, 750=0.40% 00:46:28.668 cpu : usr=0.64%, sys=0.90%, ctx=3525, majf=0, minf=1 00:46:28.668 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:46:28.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.668 issued rwts: total=0,2511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.668 job4: (groupid=0, jobs=1): err= 0: pid=101708: Mon Jul 22 14:59:47 2024 00:46:28.668 write: IOPS=720, BW=180MiB/s (189MB/s)(1816MiB/10078msec); 0 zone resets 00:46:28.668 slat (usec): min=18, max=19659, avg=1350.50, stdev=2291.29 00:46:28.668 clat (usec): min=1414, max=164110, avg=87402.50, stdev=10627.83 00:46:28.668 lat (usec): min=1486, max=165026, avg=88753.00, stdev=10623.49 00:46:28.668 clat percentiles (msec): 00:46:28.668 | 1.00th=[ 64], 5.00th=[ 79], 10.00th=[ 80], 20.00th=[ 83], 00:46:28.668 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88], 00:46:28.668 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 101], 00:46:28.668 | 99.00th=[ 123], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 161], 00:46:28.668 | 99.99th=[ 165] 00:46:28.668 bw ( KiB/s): min=161792, max=196215, per=15.06%, avg=184332.45, stdev=11695.46, samples=20 00:46:28.668 iops : min= 632, max= 766, avg=719.95, stdev=45.60, samples=20 00:46:28.668 lat (msec) : 2=0.04%, 4=0.08%, 10=0.21%, 50=0.28%, 100=93.92% 00:46:28.668 lat (msec) : 250=5.48% 00:46:28.668 cpu : usr=1.81%, sys=2.59%, ctx=9389, majf=0, minf=1 00:46:28.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:46:28.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.668 issued rwts: total=0,7265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.668 job5: (groupid=0, jobs=1): err= 0: pid=101709: Mon Jul 22 14:59:47 2024 00:46:28.668 write: IOPS=1495, BW=374MiB/s (392MB/s)(3754MiB/10040msec); 0 zone resets 00:46:28.668 slat (usec): min=15, max=25658, avg=659.72, stdev=1139.99 00:46:28.668 clat (msec): min=7, max=133, avg=42.12, stdev= 9.05 00:46:28.668 lat (msec): min=7, max=133, avg=42.78, stdev= 9.19 00:46:28.668 clat percentiles (msec): 00:46:28.668 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 36], 00:46:28.668 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:46:28.668 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 48], 95.00th=[ 51], 00:46:28.668 | 99.00th=[ 70], 99.50th=[ 104], 99.90th=[ 131], 99.95th=[ 132], 00:46:28.668 | 99.99th=[ 133] 00:46:28.668 bw ( KiB/s): min=195584, max=473088, per=31.27%, avg=382785.95, stdev=58277.03, samples=20 00:46:28.668 iops : min= 764, max= 1848, avg=1495.25, stdev=227.65, samples=20 00:46:28.668 lat (msec) : 10=0.07%, 20=0.35%, 50=94.31%, 100=4.71%, 250=0.57% 00:46:28.668 cpu : usr=3.29%, sys=3.96%, ctx=21367, majf=0, minf=1 00:46:28.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:46:28.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.668 issued rwts: 
total=0,15017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.668 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.668 job6: (groupid=0, jobs=1): err= 0: pid=101710: Mon Jul 22 14:59:47 2024 00:46:28.668 write: IOPS=232, BW=58.2MiB/s (61.0MB/s)(596MiB/10237msec); 0 zone resets 00:46:28.668 slat (usec): min=21, max=47074, avg=4195.22, stdev=7807.31 00:46:28.668 clat (msec): min=16, max=581, avg=270.72, stdev=76.49 00:46:28.668 lat (msec): min=16, max=581, avg=274.91, stdev=77.28 00:46:28.668 clat percentiles (msec): 00:46:28.668 | 1.00th=[ 54], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 192], 00:46:28.668 | 30.00th=[ 201], 40.00th=[ 264], 50.00th=[ 309], 60.00th=[ 321], 00:46:28.668 | 70.00th=[ 326], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 351], 00:46:28.668 | 99.00th=[ 447], 99.50th=[ 510], 99.90th=[ 550], 99.95th=[ 584], 00:46:28.668 | 99.99th=[ 584] 00:46:28.668 bw ( KiB/s): min=47104, max=88576, per=4.85%, avg=59365.90, stdev=15625.73, samples=20 00:46:28.668 iops : min= 184, max= 346, avg=231.80, stdev=61.10, samples=20 00:46:28.668 lat (msec) : 20=0.04%, 50=0.84%, 100=1.01%, 250=37.45%, 500=60.08% 00:46:28.668 lat (msec) : 750=0.59% 00:46:28.668 cpu : usr=0.72%, sys=0.93%, ctx=1843, majf=0, minf=1 00:46:28.668 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:46:28.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.669 issued rwts: total=0,2382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.669 job7: (groupid=0, jobs=1): err= 0: pid=101711: Mon Jul 22 14:59:47 2024 00:46:28.669 write: IOPS=211, BW=52.8MiB/s (55.4MB/s)(540MiB/10229msec); 0 zone resets 00:46:28.669 slat (usec): min=17, max=61127, avg=4623.40, stdev=9068.54 00:46:28.669 clat (msec): min=48, max=570, avg=298.13, stdev=77.59 00:46:28.669 lat (msec): min=48, max=570, avg=302.75, stdev=78.25 00:46:28.669 clat percentiles (msec): 00:46:28.669 | 1.00th=[ 100], 5.00th=[ 180], 10.00th=[ 192], 20.00th=[ 224], 00:46:28.669 | 30.00th=[ 236], 40.00th=[ 300], 50.00th=[ 330], 60.00th=[ 347], 00:46:28.669 | 70.00th=[ 355], 80.00th=[ 368], 90.00th=[ 372], 95.00th=[ 376], 00:46:28.669 | 99.00th=[ 464], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 575], 00:46:28.669 | 99.99th=[ 575] 00:46:28.669 bw ( KiB/s): min=43520, max=77824, per=4.39%, avg=53711.50, stdev=12803.84, samples=20 00:46:28.669 iops : min= 170, max= 304, avg=209.70, stdev=50.00, samples=20 00:46:28.669 lat (msec) : 50=0.05%, 100=1.11%, 250=36.42%, 500=61.78%, 750=0.65% 00:46:28.669 cpu : usr=0.59%, sys=0.71%, ctx=2913, majf=0, minf=1 00:46:28.669 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:46:28.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.669 issued rwts: total=0,2161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.669 job8: (groupid=0, jobs=1): err= 0: pid=101712: Mon Jul 22 14:59:47 2024 00:46:28.669 write: IOPS=318, BW=79.5MiB/s (83.4MB/s)(814MiB/10242msec); 0 zone resets 00:46:28.669 slat (usec): min=22, max=140941, avg=3010.37, stdev=7256.55 00:46:28.669 clat (usec): min=1705, max=584737, avg=198094.89, stdev=129456.32 00:46:28.669 lat (msec): min=5, max=584, avg=201.11, stdev=131.26 00:46:28.669 clat percentiles (msec): 
00:46:28.669 | 1.00th=[ 17], 5.00th=[ 78], 10.00th=[ 91], 20.00th=[ 94], 00:46:28.669 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 101], 60.00th=[ 296], 00:46:28.669 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 368], 95.00th=[ 380], 00:46:28.669 | 99.00th=[ 435], 99.50th=[ 498], 99.90th=[ 567], 99.95th=[ 584], 00:46:28.669 | 99.99th=[ 584] 00:46:28.669 bw ( KiB/s): min=42496, max=187254, per=6.68%, avg=81771.30, stdev=56467.82, samples=20 00:46:28.669 iops : min= 166, max= 731, avg=319.30, stdev=220.60, samples=20 00:46:28.669 lat (msec) : 2=0.03%, 4=0.03%, 10=0.49%, 20=0.61%, 50=0.49% 00:46:28.669 lat (msec) : 100=47.99%, 250=9.58%, 500=40.34%, 750=0.43% 00:46:28.669 cpu : usr=0.97%, sys=1.08%, ctx=4295, majf=0, minf=1 00:46:28.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:46:28.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.669 issued rwts: total=0,3257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.669 job9: (groupid=0, jobs=1): err= 0: pid=101713: Mon Jul 22 14:59:47 2024 00:46:28.669 write: IOPS=208, BW=52.0MiB/s (54.5MB/s)(533MiB/10242msec); 0 zone resets 00:46:28.669 slat (usec): min=20, max=216841, avg=4633.61, stdev=11106.16 00:46:28.669 clat (msec): min=2, max=643, avg=302.79, stdev=115.33 00:46:28.669 lat (msec): min=3, max=643, avg=307.42, stdev=116.76 00:46:28.669 clat percentiles (msec): 00:46:28.669 | 1.00th=[ 32], 5.00th=[ 121], 10.00th=[ 165], 20.00th=[ 197], 00:46:28.669 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 338], 60.00th=[ 368], 00:46:28.669 | 70.00th=[ 384], 80.00th=[ 388], 90.00th=[ 430], 95.00th=[ 460], 00:46:28.669 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 625], 99.95th=[ 642], 00:46:28.669 | 99.99th=[ 642] 00:46:28.669 bw ( KiB/s): min=31169, max=109860, per=4.32%, avg=52904.95, stdev=20206.25, samples=20 00:46:28.669 iops : min= 121, max= 429, avg=206.50, stdev=79.02, samples=20 00:46:28.669 lat (msec) : 4=0.09%, 10=0.52%, 20=0.19%, 50=0.80%, 100=2.06% 00:46:28.669 lat (msec) : 250=38.90%, 500=53.73%, 750=3.71% 00:46:28.669 cpu : usr=0.62%, sys=0.87%, ctx=2807, majf=0, minf=1 00:46:28.669 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:46:28.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.669 issued rwts: total=0,2131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.669 job10: (groupid=0, jobs=1): err= 0: pid=101714: Mon Jul 22 14:59:47 2024 00:46:28.669 write: IOPS=720, BW=180MiB/s (189MB/s)(1815MiB/10081msec); 0 zone resets 00:46:28.669 slat (usec): min=19, max=25030, avg=1358.75, stdev=2307.60 00:46:28.669 clat (msec): min=2, max=162, avg=87.44, stdev=10.11 00:46:28.669 lat (msec): min=2, max=162, avg=88.80, stdev=10.17 00:46:28.669 clat percentiles (msec): 00:46:28.669 | 1.00th=[ 74], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 82], 00:46:28.669 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88], 00:46:28.669 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 101], 00:46:28.669 | 99.00th=[ 120], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 157], 00:46:28.669 | 99.99th=[ 163] 00:46:28.669 bw ( KiB/s): min=167936, max=196608, per=15.05%, avg=184185.25, stdev=11340.16, samples=20 00:46:28.669 iops : min= 656, max= 
768, avg=719.40, stdev=44.24, samples=20 00:46:28.669 lat (msec) : 4=0.03%, 10=0.18%, 20=0.25%, 50=0.22%, 100=94.39% 00:46:28.669 lat (msec) : 250=4.93% 00:46:28.669 cpu : usr=1.59%, sys=2.18%, ctx=11341, majf=0, minf=1 00:46:28.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:46:28.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:46:28.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:46:28.669 issued rwts: total=0,7260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:46:28.669 latency : target=0, window=0, percentile=100.00%, depth=64 00:46:28.669 00:46:28.669 Run status group 0 (all jobs): 00:46:28.669 WRITE: bw=1195MiB/s (1253MB/s), 52.0MiB/s-374MiB/s (54.5MB/s-392MB/s), io=12.0GiB (12.8GB), run=10040-10242msec 00:46:28.669 00:46:28.669 Disk stats (read/write): 00:46:28.669 nvme0n1: ios=50/4317, merge=0/0, ticks=34/1206150, in_queue=1206184, util=98.34% 00:46:28.669 nvme10n1: ios=49/4168, merge=0/0, ticks=37/1204775, in_queue=1204812, util=98.29% 00:46:28.669 nvme1n1: ios=48/5151, merge=0/0, ticks=29/1202178, in_queue=1202207, util=98.47% 00:46:28.669 nvme2n1: ios=45/4908, merge=0/0, ticks=35/1206032, in_queue=1206067, util=98.42% 00:46:28.669 nvme3n1: ios=29/14465, merge=0/0, ticks=23/1222883, in_queue=1222906, util=98.55% 00:46:28.669 nvme4n1: ios=27/29999, merge=0/0, ticks=22/1223553, in_queue=1223575, util=98.51% 00:46:28.669 nvme5n1: ios=0/4657, merge=0/0, ticks=0/1207974, in_queue=1207974, util=98.60% 00:46:28.669 nvme6n1: ios=0/4213, merge=0/0, ticks=0/1204561, in_queue=1204561, util=98.52% 00:46:28.669 nvme7n1: ios=0/6410, merge=0/0, ticks=0/1208688, in_queue=1208688, util=98.85% 00:46:28.669 nvme8n1: ios=0/4157, merge=0/0, ticks=0/1206635, in_queue=1206635, util=98.89% 00:46:28.669 nvme9n1: ios=0/14464, merge=0/0, ticks=0/1223277, in_queue=1223277, util=98.97% 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:46:28.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.669 14:59:47 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:46:28.669 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:46:28.669 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:46:28.670 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:46:28.670 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:46:28.670 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.670 14:59:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:46:28.670 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:46:28.670 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:46:28.670 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.670 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.929 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:46:28.930 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:46:28.930 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:46:28.930 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode11 00:46:29.188 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:46:29.188 rmmod nvme_tcp 00:46:29.188 rmmod nvme_fabrics 00:46:29.188 rmmod nvme_keyring 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 101002 ']' 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 101002 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 101002 ']' 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 101002 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101002 00:46:29.188 killing process with pid 101002 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101002' 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 101002 00:46:29.188 14:59:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 101002 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:46:29.755 00:46:29.755 real 0m49.990s 00:46:29.755 user 2m54.826s 00:46:29.755 sys 0m22.222s 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:46:29.755 14:59:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:46:29.755 ************************************ 00:46:29.755 END TEST nvmf_multiconnection 00:46:29.755 ************************************ 00:46:29.755 14:59:49 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:46:29.755 14:59:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:46:29.755 14:59:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:46:29.755 14:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:46:29.755 ************************************ 00:46:29.755 START TEST nvmf_initiator_timeout 00:46:29.755 ************************************ 00:46:29.755 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:46:30.014 * Looking for test storage... 
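The multiconnection teardown traced above repeats the same three steps for each of the 11 subsystems. A minimal standalone sketch of that loop, assuming the rpc_cmd and waitforserial_disconnect helpers from the sourced autotest scripts and NVMF_SUBSYS=11 as shown by the "seq 1 11" trace:

#!/usr/bin/env bash
# Sketch of the per-subsystem teardown loop traced in the log above.
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # 1) Drop the host-side fabrics connection so the namespace block device disappears.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # 2) Wait until no block device reports the matching serial (SPDK1..SPDK11).
    waitforserial_disconnect "SPDK${i}"
    # 3) Remove the subsystem from the target over the RPC socket.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done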
00:46:30.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:30.014 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:46:30.015 14:59:49 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:46:30.015 Cannot find device "nvmf_tgt_br" 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:46:30.015 Cannot find device "nvmf_tgt_br2" 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:46:30.015 Cannot find device "nvmf_tgt_br" 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:46:30.015 Cannot find device "nvmf_tgt_br2" 00:46:30.015 14:59:49 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:46:30.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:46:30.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:46:30.015 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
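For reference, the nvmf_veth_init sequence traced above boils down to one veth pair left in the root namespace for the initiator (10.0.0.1) and two pairs whose far ends move into nvmf_tgt_ns_spdk for the target listeners (10.0.0.2 and 10.0.0.3), all bridged together over nvmf_br. A condensed iproute2 sketch with names and addresses taken from the log (root privileges assumed; the iptables rules that follow in the trace are not repeated here):

# Condensed sketch of the namespace/veth/bridge setup performed by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br    # bridge the root-namespace peer ends together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br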
00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:46:30.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:46:30.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:46:30.274 00:46:30.274 --- 10.0.0.2 ping statistics --- 00:46:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:30.274 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:46:30.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:46:30.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:46:30.274 00:46:30.274 --- 10.0.0.3 ping statistics --- 00:46:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:30.274 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:46:30.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:46:30.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:46:30.274 00:46:30.274 --- 10.0.0.1 ping statistics --- 00:46:30.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:46:30.274 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=102072 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 102072 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 102072 ']' 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:46:30.274 14:59:49 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:30.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:30.274 14:59:49 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:46:30.274 [2024-07-22 14:59:49.796650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:46:30.274 [2024-07-22 14:59:49.796718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:30.539 [2024-07-22 14:59:49.939541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:30.539 [2024-07-22 14:59:49.983775] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:30.539 [2024-07-22 14:59:49.983842] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:30.539 [2024-07-22 14:59:49.983848] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:30.539 [2024-07-22 14:59:49.983852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:30.539 [2024-07-22 14:59:49.983856] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
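The nvmfappstart trace above reduces to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal sketch, assuming the repository layout shown in the log and the waitforlisten helper from autotest_common.sh:

# Sketch of nvmfappstart as traced above: start the target inside the namespace
# and wait for /var/tmp/spdk.sock to come up before issuing any RPCs.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF)
"${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}" &   # shm id 0, all tracepoint groups, core mask 0xF (4 cores)
nvmfpid=$!
waitforlisten "$nvmfpid"                        # polls the UNIX domain socket /var/tmp/spdk.sock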
00:46:30.539 [2024-07-22 14:59:49.984048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:46:30.539 [2024-07-22 14:59:49.984409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:46:30.539 [2024-07-22 14:59:49.984472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:46:30.539 [2024-07-22 14:59:49.984477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:31.115 Malloc0 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:31.115 Delay0 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:31.115 [2024-07-22 14:59:50.733823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.115 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:31.374 [2024-07-22 14:59:50.773932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:46:31.374 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:46:31.375 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:46:31.375 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:46:31.375 14:59:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=102167 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:46:33.907 14:59:52 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:46:33.907 [global] 00:46:33.907 thread=1 00:46:33.907 invalidate=1 00:46:33.907 rw=write 00:46:33.907 time_based=1 00:46:33.907 runtime=60 00:46:33.907 ioengine=libaio 00:46:33.907 direct=1 00:46:33.907 bs=4096 00:46:33.907 iodepth=1 00:46:33.907 norandommap=0 00:46:33.907 numjobs=1 00:46:33.907 00:46:33.907 verify_dump=1 00:46:33.907 verify_backlog=512 00:46:33.907 verify_state_save=0 00:46:33.907 do_verify=1 00:46:33.907 verify=crc32c-intel 00:46:33.907 [job0] 00:46:33.907 filename=/dev/nvme0n1 00:46:33.907 Could not set queue depth (nvme0n1) 00:46:33.907 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:46:33.907 fio-3.35 00:46:33.907 Starting 1 thread 
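With the fio write job above running against the Delay0-backed namespace at queue depth 1, the RPC calls traced next appear to be the heart of the test: the delay bdev's latencies are pushed well past the initiator's I/O timeout (nominally 30 s for the kernel NVMe host), held briefly, then dropped back so the 60-second job can complete. A sketch of that sequence, with values copied from the trace and assuming the delay arguments are in microseconds as in the bdev_delay RPCs:

# Latency escalation performed while fio is in flight (values as they appear in the trace below).
rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000    # ~31 s, longer than the initiator's I/O timeout
rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3                                                        # hold the elevated latencies for a few seconds
rpc_cmd bdev_delay_update_latency Delay0 avg_read  30          # restore the original 30 us latencies
rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
rpc_cmd bdev_delay_update_latency Delay0 p99_write 30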
00:46:36.441 14:59:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:46:36.441 14:59:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:36.441 14:59:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:36.441 true 00:46:36.441 14:59:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.441 14:59:55 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:46:36.441 14:59:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:36.441 14:59:55 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:36.441 true 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:36.441 true 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:36.441 true 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.441 14:59:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:39.730 true 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:39.730 true 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:39.730 true 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd 
bdev_delay_update_latency Delay0 p99_write 30 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:46:39.730 true 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:46:39.730 14:59:59 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 102167 00:47:35.953 00:47:35.953 job0: (groupid=0, jobs=1): err= 0: pid=102188: Mon Jul 22 15:00:53 2024 00:47:35.953 read: IOPS=981, BW=3925KiB/s (4020kB/s)(230MiB/60000msec) 00:47:35.953 slat (nsec): min=6426, max=84802, avg=7852.99, stdev=2243.68 00:47:35.953 clat (usec): min=117, max=685, avg=172.36, stdev=20.72 00:47:35.953 lat (usec): min=124, max=695, avg=180.21, stdev=21.31 00:47:35.953 clat percentiles (usec): 00:47:35.953 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 157], 00:47:35.953 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:47:35.953 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:47:35.953 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 243], 99.95th=[ 262], 00:47:35.953 | 99.99th=[ 529] 00:47:35.953 write: IOPS=983, BW=3935KiB/s (4029kB/s)(231MiB/60000msec); 0 zone resets 00:47:35.953 slat (usec): min=8, max=9156, avg=12.52, stdev=50.01 00:47:35.953 clat (usec): min=2, max=40585k, avg=822.45, stdev=167060.88 00:47:35.953 lat (usec): min=106, max=40585k, avg=834.97, stdev=167060.89 00:47:35.953 clat percentiles (usec): 00:47:35.953 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 121], 00:47:35.953 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 139], 00:47:35.953 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:47:35.953 | 99.00th=[ 180], 99.50th=[ 190], 99.90th=[ 255], 99.95th=[ 347], 00:47:35.953 | 99.99th=[ 1074] 00:47:35.953 bw ( KiB/s): min= 5720, max=16384, per=100.00%, avg=11867.90, stdev=1960.12, samples=39 00:47:35.953 iops : min= 1430, max= 4096, avg=2966.97, stdev=490.03, samples=39 00:47:35.953 lat (usec) : 4=0.01%, 20=0.01%, 50=0.01%, 100=0.05%, 250=99.85% 00:47:35.953 lat (usec) : 500=0.07%, 750=0.01%, 1000=0.01% 00:47:35.953 lat (msec) : 2=0.01%, 10=0.01%, >=2000=0.01% 00:47:35.953 cpu : usr=0.34%, sys=1.44%, ctx=117931, majf=0, minf=2 00:47:35.953 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:47:35.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:35.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:47:35.953 issued rwts: total=58880,59018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:47:35.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:47:35.954 00:47:35.954 Run status group 0 (all jobs): 00:47:35.954 READ: bw=3925KiB/s (4020kB/s), 3925KiB/s-3925KiB/s (4020kB/s-4020kB/s), io=230MiB (241MB), run=60000-60000msec 00:47:35.954 WRITE: bw=3935KiB/s (4029kB/s), 3935KiB/s-3935KiB/s (4029kB/s-4029kB/s), io=231MiB (242MB), run=60000-60000msec 00:47:35.954 00:47:35.954 Disk stats (read/write): 00:47:35.954 nvme0n1: ios=58761/58880, merge=0/0, ticks=10487/8272, in_queue=18759, util=99.63% 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:47:35.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
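Between starting fio and waiting on it, the test inflates the Delay0 bdev's latencies far beyond the initiator's I/O timeout, sleeps, and then drops them back to 30 microseconds so the 60-second run can drain and verify. A sketch of the same sequence issued directly through SPDK's rpc.py (the rpc.py path is an assumption; the latency values, in microseconds, are the ones recorded in the trace):

```bash
# Sketch only: equivalent of the rpc_cmd calls traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location

# Inflate the delay bdev's latencies so in-flight I/O outlives the
# initiator's timeout while fio keeps submitting.
$RPC bdev_delay_update_latency Delay0 avg_read  31000000
$RPC bdev_delay_update_latency Delay0 avg_write 31000000
$RPC bdev_delay_update_latency Delay0 p99_read  31000000
$RPC bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3

# Restore near-zero latencies so the remaining I/O completes and verifies.
for metric in avg_read avg_write p99_read p99_write; do
    $RPC bdev_delay_update_latency Delay0 "$metric" 30
done
```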
00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:47:35.954 nvmf hotplug test: fio successful as expected 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:35.954 rmmod nvme_tcp 00:47:35.954 rmmod nvme_fabrics 00:47:35.954 rmmod nvme_keyring 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 102072 ']' 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 102072 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 102072 ']' 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 102072 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:35.954 15:00:53 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102072 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102072' 00:47:35.954 killing process with pid 102072 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 102072 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 102072 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:47:35.954 00:47:35.954 real 1m4.716s 00:47:35.954 user 4m10.227s 00:47:35.954 sys 0m5.579s 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:35.954 15:00:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:47:35.954 ************************************ 00:47:35.954 END TEST nvmf_initiator_timeout 00:47:35.954 ************************************ 00:47:35.954 15:00:54 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:47:35.954 15:00:54 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:47:35.954 15:00:54 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:35.954 15:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:35.954 15:00:54 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:47:35.954 15:00:54 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:35.954 15:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:35.954 15:00:54 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:47:35.954 15:00:54 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:47:35.954 15:00:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:35.954 15:00:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:35.954 15:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:35.954 ************************************ 00:47:35.954 START TEST nvmf_multicontroller 00:47:35.954 ************************************ 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:47:35.954 * Looking for test storage... 
00:47:35.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.954 15:00:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:47:35.955 15:00:54 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:47:35.955 Cannot find device "nvmf_tgt_br" 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:47:35.955 Cannot find device "nvmf_tgt_br2" 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 
-- # ip link set nvmf_tgt_br down 00:47:35.955 Cannot find device "nvmf_tgt_br" 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:47:35.955 Cannot find device "nvmf_tgt_br2" 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:35.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:35.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 
-- # ip link set nvmf_init_br master nvmf_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:47:35.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:35.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:47:35.955 00:47:35.955 --- 10.0.0.2 ping statistics --- 00:47:35.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:35.955 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:47:35.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:35.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:47:35.955 00:47:35.955 --- 10.0.0.3 ping statistics --- 00:47:35.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:35.955 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:35.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:35.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:47:35.955 00:47:35.955 --- 10.0.0.1 ping statistics --- 00:47:35.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:35.955 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=103026 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 103026 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0xE 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 103026 ']' 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:35.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:35.955 15:00:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:35.955 [2024-07-22 15:00:54.682474] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:35.955 [2024-07-22 15:00:54.682560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:35.955 [2024-07-22 15:00:54.822069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:35.955 [2024-07-22 15:00:54.867397] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:35.955 [2024-07-22 15:00:54.867448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:35.955 [2024-07-22 15:00:54.867470] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:35.955 [2024-07-22 15:00:54.867474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:35.955 [2024-07-22 15:00:54.867478] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
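At this point the multicontroller test has built the veth/namespace topology and is launching the NVMe-oF target inside it, then blocking until the target's RPC socket answers before creating the transport and subsystems. A sketch of that start-and-wait step using the flags shown in the trace (the polling loop and its bounds are illustrative; `rpc_get_methods` is used here as a generic liveness probe and is an assumption, not taken from this log):

```bash
# Sketch only: approximates the nvmfappstart/waitforlisten step traced above.
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location

# Launch the target inside the test namespace with the flags from the log:
# shared-memory id 0, all trace groups enabled, reactors on cores 1-3 (0xE).
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll the default RPC socket until the application responds.
for _ in $(seq 1 100); do
    if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.5
done
```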
00:47:35.955 [2024-07-22 15:00:54.867851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:35.955 [2024-07-22 15:00:54.867710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:47:35.955 [2024-07-22 15:00:54.867851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.955 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.214 [2024-07-22 15:00:55.585197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.214 Malloc0 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.214 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 [2024-07-22 15:00:55.655601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 
15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 [2024-07-22 15:00:55.667510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 Malloc1 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=103078 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@47 -- # waitforlisten 103078 /var/tmp/bdevperf.sock 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 103078 ']' 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:47:36.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:36.215 15:00:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.182 NVMe0n1 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.182 1 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.182 2024/07/22 15:00:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:47:37.182 request: 00:47:37.182 { 00:47:37.182 "method": "bdev_nvme_attach_controller", 00:47:37.182 "params": { 00:47:37.182 "name": "NVMe0", 00:47:37.182 "trtype": "tcp", 00:47:37.182 "traddr": "10.0.0.2", 00:47:37.182 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:47:37.182 "hostaddr": "10.0.0.2", 00:47:37.182 "hostsvcid": "60000", 00:47:37.182 "adrfam": "ipv4", 00:47:37.182 "trsvcid": "4420", 00:47:37.182 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:47:37.182 } 00:47:37.182 } 00:47:37.182 Got JSON-RPC error response 00:47:37.182 GoRPCClient: error on JSON-RPC call 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.182 2024/07/22 15:00:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:47:37.182 request: 00:47:37.182 { 00:47:37.182 "method": "bdev_nvme_attach_controller", 00:47:37.182 "params": { 00:47:37.182 "name": "NVMe0", 00:47:37.182 "trtype": "tcp", 00:47:37.182 "traddr": "10.0.0.2", 00:47:37.182 "hostaddr": "10.0.0.2", 00:47:37.182 "hostsvcid": "60000", 00:47:37.182 "adrfam": "ipv4", 00:47:37.182 "trsvcid": "4420", 00:47:37.182 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:47:37.182 } 00:47:37.182 } 00:47:37.182 Got JSON-RPC error response 00:47:37.182 GoRPCClient: error on JSON-RPC call 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.182 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.182 2024/07/22 15:00:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:47:37.182 request: 00:47:37.182 { 00:47:37.182 "method": "bdev_nvme_attach_controller", 00:47:37.182 "params": { 00:47:37.182 "name": "NVMe0", 00:47:37.182 "trtype": "tcp", 00:47:37.182 "traddr": "10.0.0.2", 00:47:37.182 "hostaddr": "10.0.0.2", 00:47:37.182 "hostsvcid": "60000", 00:47:37.182 "adrfam": "ipv4", 00:47:37.182 "trsvcid": "4420", 00:47:37.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:37.182 
"multipath": "disable" 00:47:37.182 } 00:47:37.182 } 00:47:37.182 Got JSON-RPC error response 00:47:37.182 GoRPCClient: error on JSON-RPC call 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.183 2024/07/22 15:00:56 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:47:37.183 request: 00:47:37.183 { 00:47:37.183 "method": "bdev_nvme_attach_controller", 00:47:37.183 "params": { 00:47:37.183 "name": "NVMe0", 00:47:37.183 "trtype": "tcp", 00:47:37.183 "traddr": "10.0.0.2", 00:47:37.183 "hostaddr": "10.0.0.2", 00:47:37.183 "hostsvcid": "60000", 00:47:37.183 "adrfam": "ipv4", 00:47:37.183 "trsvcid": "4420", 00:47:37.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:47:37.183 "multipath": "failover" 00:47:37.183 } 00:47:37.183 } 00:47:37.183 Got JSON-RPC error response 00:47:37.183 GoRPCClient: error on JSON-RPC call 00:47:37.183 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.442 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.442 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:47:37.442 15:00:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:47:38.826 0 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 103078 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 103078 ']' 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 103078 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103078 00:47:38.826 killing process with pid 103078 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103078' 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 103078 00:47:38.826 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 103078 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:47:39.085 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:47:39.085 [2024-07-22 15:00:55.782362] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:39.085 [2024-07-22 15:00:55.782439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103078 ] 00:47:39.085 [2024-07-22 15:00:55.919863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:39.085 [2024-07-22 15:00:55.998643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:39.085 [2024-07-22 15:00:56.953004] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name b17ec3bf-347e-4ebc-a785-d61f6cac1a34 already exists 00:47:39.085 [2024-07-22 15:00:56.953078] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:b17ec3bf-347e-4ebc-a785-d61f6cac1a34 alias for bdev NVMe1n1 00:47:39.085 [2024-07-22 15:00:56.953091] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:47:39.085 Running I/O for 1 seconds... 
00:47:39.085 00:47:39.085 Latency(us) 00:47:39.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:39.085 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:47:39.085 NVMe0n1 : 1.01 24621.23 96.18 0.00 0.00 5186.89 1931.74 9730.24 00:47:39.085 =================================================================================================================== 00:47:39.085 Total : 24621.23 96.18 0.00 0.00 5186.89 1931.74 9730.24 00:47:39.085 Received shutdown signal, test time was about 1.000000 seconds 00:47:39.085 00:47:39.085 Latency(us) 00:47:39.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:47:39.085 =================================================================================================================== 00:47:39.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:47:39.085 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:39.085 rmmod nvme_tcp 00:47:39.085 rmmod nvme_fabrics 00:47:39.085 rmmod nvme_keyring 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 103026 ']' 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 103026 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 103026 ']' 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 103026 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103026 00:47:39.085 killing process with pid 103026 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103026' 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 103026 00:47:39.085 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- 
# wait 103026 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:47:39.343 00:47:39.343 real 0m4.851s 00:47:39.343 user 0m15.038s 00:47:39.343 sys 0m1.151s 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:39.343 15:00:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:47:39.343 ************************************ 00:47:39.343 END TEST nvmf_multicontroller 00:47:39.343 ************************************ 00:47:39.609 15:00:59 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:47:39.609 15:00:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:39.609 15:00:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:39.609 15:00:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:39.609 ************************************ 00:47:39.609 START TEST nvmf_aer 00:47:39.609 ************************************ 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:47:39.609 * Looking for test storage... 
00:47:39.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:39.609 
15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:39.609 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:47:39.610 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:47:39.610 Cannot find device "nvmf_tgt_br" 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:47:39.874 Cannot find device "nvmf_tgt_br2" 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:47:39.874 Cannot find device "nvmf_tgt_br" 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:47:39.874 Cannot find device "nvmf_tgt_br2" 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:39.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:39.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:39.874 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:47:40.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:40.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:47:40.134 00:47:40.134 --- 10.0.0.2 ping statistics --- 00:47:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:40.134 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:47:40.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:40.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:47:40.134 00:47:40.134 --- 10.0.0.3 ping statistics --- 00:47:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:40.134 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:40.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:40.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:47:40.134 00:47:40.134 --- 10.0.0.1 ping statistics --- 00:47:40.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:40.134 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=103331 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 103331 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 103331 ']' 00:47:40.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.134 15:00:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:40.134 [2024-07-22 15:00:59.611212] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:40.134 [2024-07-22 15:00:59.611271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:40.134 [2024-07-22 15:00:59.754064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:40.394 [2024-07-22 15:00:59.803767] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:40.394 [2024-07-22 15:00:59.803819] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:47:40.394 [2024-07-22 15:00:59.803826] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:40.394 [2024-07-22 15:00:59.803831] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:40.394 [2024-07-22 15:00:59.803835] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:40.394 [2024-07-22 15:00:59.804051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:40.394 [2024-07-22 15:00:59.804270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:47:40.394 [2024-07-22 15:00:59.804393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:47:40.394 [2024-07-22 15:00:59.804397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.962 [2024-07-22 15:01:00.500594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.962 Malloc0 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.962 [2024-07-22 15:01:00.565839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:40.962 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:40.962 [ 00:47:40.962 { 00:47:40.962 "allow_any_host": true, 00:47:40.962 "hosts": [], 00:47:40.962 "listen_addresses": [], 00:47:40.962 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:40.962 "subtype": "Discovery" 00:47:40.962 }, 00:47:40.962 { 00:47:40.962 "allow_any_host": true, 00:47:40.962 "hosts": [], 00:47:40.962 "listen_addresses": [ 00:47:40.962 { 00:47:40.962 "adrfam": "IPv4", 00:47:40.962 "traddr": "10.0.0.2", 00:47:40.962 "trsvcid": "4420", 00:47:40.962 "trtype": "TCP" 00:47:40.962 } 00:47:40.962 ], 00:47:40.962 "max_cntlid": 65519, 00:47:40.962 "max_namespaces": 2, 00:47:40.962 "min_cntlid": 1, 00:47:40.962 "model_number": "SPDK bdev Controller", 00:47:40.962 "namespaces": [ 00:47:40.962 { 00:47:40.962 "bdev_name": "Malloc0", 00:47:40.962 "name": "Malloc0", 00:47:40.962 "nguid": "CBC6A306273342D797517745832AC43D", 00:47:40.962 "nsid": 1, 00:47:40.962 "uuid": "cbc6a306-2733-42d7-9751-7745832ac43d" 00:47:40.962 } 00:47:40.962 ], 00:47:40.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:40.963 "serial_number": "SPDK00000000000001", 00:47:40.963 "subtype": "NVMe" 00:47:40.963 } 00:47:40.963 ] 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=103385 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:47:40.963 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:41.220 Malloc1 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.220 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:41.221 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.221 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:47:41.221 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.221 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:41.484 Asynchronous Event Request test 00:47:41.484 Attaching to 10.0.0.2 00:47:41.484 Attached to 10.0.0.2 00:47:41.484 Registering asynchronous event callbacks... 00:47:41.484 Starting namespace attribute notice tests for all controllers... 00:47:41.484 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:47:41.484 aer_cb - Changed Namespace 00:47:41.484 Cleaning up... 00:47:41.484 [ 00:47:41.484 { 00:47:41.484 "allow_any_host": true, 00:47:41.484 "hosts": [], 00:47:41.484 "listen_addresses": [], 00:47:41.484 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:41.484 "subtype": "Discovery" 00:47:41.484 }, 00:47:41.484 { 00:47:41.484 "allow_any_host": true, 00:47:41.484 "hosts": [], 00:47:41.484 "listen_addresses": [ 00:47:41.484 { 00:47:41.484 "adrfam": "IPv4", 00:47:41.484 "traddr": "10.0.0.2", 00:47:41.484 "trsvcid": "4420", 00:47:41.484 "trtype": "TCP" 00:47:41.484 } 00:47:41.484 ], 00:47:41.484 "max_cntlid": 65519, 00:47:41.484 "max_namespaces": 2, 00:47:41.484 "min_cntlid": 1, 00:47:41.484 "model_number": "SPDK bdev Controller", 00:47:41.484 "namespaces": [ 00:47:41.484 { 00:47:41.484 "bdev_name": "Malloc0", 00:47:41.484 "name": "Malloc0", 00:47:41.484 "nguid": "CBC6A306273342D797517745832AC43D", 00:47:41.484 "nsid": 1, 00:47:41.484 "uuid": "cbc6a306-2733-42d7-9751-7745832ac43d" 00:47:41.484 }, 00:47:41.484 { 00:47:41.484 "bdev_name": "Malloc1", 00:47:41.484 "name": "Malloc1", 00:47:41.484 "nguid": "F2CB6B77E09140EDBDC17E98AF5EBB9C", 00:47:41.484 "nsid": 2, 00:47:41.484 "uuid": "f2cb6b77-e091-40ed-bdc1-7e98af5ebb9c" 00:47:41.484 } 00:47:41.484 ], 00:47:41.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:41.484 "serial_number": "SPDK00000000000001", 00:47:41.484 "subtype": "NVMe" 00:47:41.484 } 00:47:41.484 ] 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 103385 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:47:41.484 15:01:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:41.484 rmmod nvme_tcp 00:47:41.484 rmmod nvme_fabrics 00:47:41.484 rmmod nvme_keyring 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 103331 ']' 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 103331 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 103331 ']' 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 103331 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:41.484 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103331 00:47:41.749 killing process with pid 103331 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103331' 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 103331 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 103331 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:47:41.749 00:47:41.749 real 0m2.331s 00:47:41.749 user 0m6.187s 00:47:41.749 sys 0m0.670s 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:41.749 15:01:01 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:47:41.749 ************************************ 00:47:41.749 END TEST nvmf_aer 00:47:41.749 ************************************ 00:47:42.010 15:01:01 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:47:42.010 15:01:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:42.010 15:01:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:42.010 15:01:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:42.010 ************************************ 00:47:42.010 START TEST nvmf_async_init 00:47:42.010 ************************************ 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:47:42.010 * Looking for test storage... 00:47:42.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:42.010 
15:01:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 
00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4d97303de7a34f778decc90feb364934 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:47:42.010 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:42.011 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:47:42.011 15:01:01 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:47:42.275 Cannot find device "nvmf_tgt_br" 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:47:42.275 Cannot find device "nvmf_tgt_br2" 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:47:42.275 Cannot find device "nvmf_tgt_br" 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:47:42.275 Cannot find device "nvmf_tgt_br2" 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:42.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:42.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:42.275 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:42.276 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:47:42.276 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:47:42.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:42.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:47:42.538 00:47:42.538 --- 10.0.0.2 ping statistics --- 00:47:42.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:42.538 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:47:42.538 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:42.538 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:47:42.538 00:47:42.538 --- 10.0.0.3 ping statistics --- 00:47:42.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:42.538 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:42.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:42.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:47:42.538 00:47:42.538 --- 10.0.0.1 ping statistics --- 00:47:42.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:42.538 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:42.538 15:01:01 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=103553 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 103553 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 103553 ']' 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:42.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:42.538 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:42.538 [2024-07-22 15:01:02.071174] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:42.538 [2024-07-22 15:01:02.071240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:42.796 [2024-07-22 15:01:02.207428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:42.797 [2024-07-22 15:01:02.255236] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:42.797 [2024-07-22 15:01:02.255282] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:47:42.797 [2024-07-22 15:01:02.255288] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:42.797 [2024-07-22 15:01:02.255292] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:42.797 [2024-07-22 15:01:02.255296] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:42.797 [2024-07-22 15:01:02.255312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.368 [2024-07-22 15:01:02.962293] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.368 null0 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.368 15:01:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4d97303de7a34f778decc90feb364934 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:47:43.626 
15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.626 [2024-07-22 15:01:03.022240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.626 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.885 nvme0n1 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.885 [ 00:47:43.885 { 00:47:43.885 "aliases": [ 00:47:43.885 "4d97303d-e7a3-4f77-8dec-c90feb364934" 00:47:43.885 ], 00:47:43.885 "assigned_rate_limits": { 00:47:43.885 "r_mbytes_per_sec": 0, 00:47:43.885 "rw_ios_per_sec": 0, 00:47:43.885 "rw_mbytes_per_sec": 0, 00:47:43.885 "w_mbytes_per_sec": 0 00:47:43.885 }, 00:47:43.885 "block_size": 512, 00:47:43.885 "claimed": false, 00:47:43.885 "driver_specific": { 00:47:43.885 "mp_policy": "active_passive", 00:47:43.885 "nvme": [ 00:47:43.885 { 00:47:43.885 "ctrlr_data": { 00:47:43.885 "ana_reporting": false, 00:47:43.885 "cntlid": 1, 00:47:43.885 "firmware_revision": "24.05.1", 00:47:43.885 "model_number": "SPDK bdev Controller", 00:47:43.885 "multi_ctrlr": true, 00:47:43.885 "oacs": { 00:47:43.885 "firmware": 0, 00:47:43.885 "format": 0, 00:47:43.885 "ns_manage": 0, 00:47:43.885 "security": 0 00:47:43.885 }, 00:47:43.885 "serial_number": "00000000000000000000", 00:47:43.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:43.885 "vendor_id": "0x8086" 00:47:43.885 }, 00:47:43.885 "ns_data": { 00:47:43.885 "can_share": true, 00:47:43.885 "id": 1 00:47:43.885 }, 00:47:43.885 "trid": { 00:47:43.885 "adrfam": "IPv4", 00:47:43.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:43.885 "traddr": "10.0.0.2", 00:47:43.885 "trsvcid": "4420", 00:47:43.885 "trtype": "TCP" 00:47:43.885 }, 00:47:43.885 "vs": { 00:47:43.885 "nvme_version": "1.3" 00:47:43.885 } 00:47:43.885 } 00:47:43.885 ] 00:47:43.885 }, 00:47:43.885 "memory_domains": [ 00:47:43.885 { 00:47:43.885 "dma_device_id": "system", 00:47:43.885 "dma_device_type": 1 00:47:43.885 } 00:47:43.885 ], 00:47:43.885 "name": "nvme0n1", 00:47:43.885 "num_blocks": 2097152, 00:47:43.885 "product_name": "NVMe disk", 00:47:43.885 "supported_io_types": { 00:47:43.885 "abort": true, 00:47:43.885 "compare": true, 00:47:43.885 "compare_and_write": true, 00:47:43.885 "flush": true, 00:47:43.885 "nvme_admin": true, 00:47:43.885 "nvme_io": true, 00:47:43.885 "read": true, 00:47:43.885 "reset": true, 00:47:43.885 "unmap": false, 00:47:43.885 "write": true, 00:47:43.885 "write_zeroes": true 00:47:43.885 }, 00:47:43.885 "uuid": "4d97303d-e7a3-4f77-8dec-c90feb364934", 00:47:43.885 "zoned": false 00:47:43.885 } 00:47:43.885 ] 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.885 
15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.885 [2024-07-22 15:01:03.302079] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:47:43.885 [2024-07-22 15:01:03.302158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16cc340 (9): Bad file descriptor 00:47:43.885 [2024-07-22 15:01:03.443782] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.885 [ 00:47:43.885 { 00:47:43.885 "aliases": [ 00:47:43.885 "4d97303d-e7a3-4f77-8dec-c90feb364934" 00:47:43.885 ], 00:47:43.885 "assigned_rate_limits": { 00:47:43.885 "r_mbytes_per_sec": 0, 00:47:43.885 "rw_ios_per_sec": 0, 00:47:43.885 "rw_mbytes_per_sec": 0, 00:47:43.885 "w_mbytes_per_sec": 0 00:47:43.885 }, 00:47:43.885 "block_size": 512, 00:47:43.885 "claimed": false, 00:47:43.885 "driver_specific": { 00:47:43.885 "mp_policy": "active_passive", 00:47:43.885 "nvme": [ 00:47:43.885 { 00:47:43.885 "ctrlr_data": { 00:47:43.885 "ana_reporting": false, 00:47:43.885 "cntlid": 2, 00:47:43.885 "firmware_revision": "24.05.1", 00:47:43.885 "model_number": "SPDK bdev Controller", 00:47:43.885 "multi_ctrlr": true, 00:47:43.885 "oacs": { 00:47:43.885 "firmware": 0, 00:47:43.885 "format": 0, 00:47:43.885 "ns_manage": 0, 00:47:43.885 "security": 0 00:47:43.885 }, 00:47:43.885 "serial_number": "00000000000000000000", 00:47:43.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:43.885 "vendor_id": "0x8086" 00:47:43.885 }, 00:47:43.885 "ns_data": { 00:47:43.885 "can_share": true, 00:47:43.885 "id": 1 00:47:43.885 }, 00:47:43.885 "trid": { 00:47:43.885 "adrfam": "IPv4", 00:47:43.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:43.885 "traddr": "10.0.0.2", 00:47:43.885 "trsvcid": "4420", 00:47:43.885 "trtype": "TCP" 00:47:43.885 }, 00:47:43.885 "vs": { 00:47:43.885 "nvme_version": "1.3" 00:47:43.885 } 00:47:43.885 } 00:47:43.885 ] 00:47:43.885 }, 00:47:43.885 "memory_domains": [ 00:47:43.885 { 00:47:43.885 "dma_device_id": "system", 00:47:43.885 "dma_device_type": 1 00:47:43.885 } 00:47:43.885 ], 00:47:43.885 "name": "nvme0n1", 00:47:43.885 "num_blocks": 2097152, 00:47:43.885 "product_name": "NVMe disk", 00:47:43.885 "supported_io_types": { 00:47:43.885 "abort": true, 00:47:43.885 "compare": true, 00:47:43.885 "compare_and_write": true, 00:47:43.885 "flush": true, 00:47:43.885 "nvme_admin": true, 00:47:43.885 "nvme_io": true, 00:47:43.885 "read": true, 00:47:43.885 "reset": true, 00:47:43.885 "unmap": false, 00:47:43.885 "write": true, 00:47:43.885 "write_zeroes": true 00:47:43.885 }, 00:47:43.885 "uuid": "4d97303d-e7a3-4f77-8dec-c90feb364934", 00:47:43.885 "zoned": false 00:47:43.885 } 00:47:43.885 ] 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:47:43.885 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8KAHrSLzg0 00:47:43.886 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:47:43.886 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8KAHrSLzg0 00:47:43.886 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:47:43.886 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.886 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:44.144 [2024-07-22 15:01:03.525811] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:47:44.144 [2024-07-22 15:01:03.525939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8KAHrSLzg0 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:44.144 [2024-07-22 15:01:03.537788] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8KAHrSLzg0 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:44.144 [2024-07-22 15:01:03.549764] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:47:44.144 [2024-07-22 15:01:03.549814] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:47:44.144 nvme0n1 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd 
bdev_get_bdevs -b nvme0n1 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:44.144 [ 00:47:44.144 { 00:47:44.144 "aliases": [ 00:47:44.144 "4d97303d-e7a3-4f77-8dec-c90feb364934" 00:47:44.144 ], 00:47:44.144 "assigned_rate_limits": { 00:47:44.144 "r_mbytes_per_sec": 0, 00:47:44.144 "rw_ios_per_sec": 0, 00:47:44.144 "rw_mbytes_per_sec": 0, 00:47:44.144 "w_mbytes_per_sec": 0 00:47:44.144 }, 00:47:44.144 "block_size": 512, 00:47:44.144 "claimed": false, 00:47:44.144 "driver_specific": { 00:47:44.144 "mp_policy": "active_passive", 00:47:44.144 "nvme": [ 00:47:44.144 { 00:47:44.144 "ctrlr_data": { 00:47:44.144 "ana_reporting": false, 00:47:44.144 "cntlid": 3, 00:47:44.144 "firmware_revision": "24.05.1", 00:47:44.144 "model_number": "SPDK bdev Controller", 00:47:44.144 "multi_ctrlr": true, 00:47:44.144 "oacs": { 00:47:44.144 "firmware": 0, 00:47:44.144 "format": 0, 00:47:44.144 "ns_manage": 0, 00:47:44.144 "security": 0 00:47:44.144 }, 00:47:44.144 "serial_number": "00000000000000000000", 00:47:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:44.144 "vendor_id": "0x8086" 00:47:44.144 }, 00:47:44.144 "ns_data": { 00:47:44.144 "can_share": true, 00:47:44.144 "id": 1 00:47:44.144 }, 00:47:44.144 "trid": { 00:47:44.144 "adrfam": "IPv4", 00:47:44.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:47:44.144 "traddr": "10.0.0.2", 00:47:44.144 "trsvcid": "4421", 00:47:44.144 "trtype": "TCP" 00:47:44.144 }, 00:47:44.144 "vs": { 00:47:44.144 "nvme_version": "1.3" 00:47:44.144 } 00:47:44.144 } 00:47:44.144 ] 00:47:44.144 }, 00:47:44.144 "memory_domains": [ 00:47:44.144 { 00:47:44.144 "dma_device_id": "system", 00:47:44.144 "dma_device_type": 1 00:47:44.144 } 00:47:44.144 ], 00:47:44.144 "name": "nvme0n1", 00:47:44.144 "num_blocks": 2097152, 00:47:44.144 "product_name": "NVMe disk", 00:47:44.144 "supported_io_types": { 00:47:44.144 "abort": true, 00:47:44.144 "compare": true, 00:47:44.144 "compare_and_write": true, 00:47:44.144 "flush": true, 00:47:44.144 "nvme_admin": true, 00:47:44.144 "nvme_io": true, 00:47:44.144 "read": true, 00:47:44.144 "reset": true, 00:47:44.144 "unmap": false, 00:47:44.144 "write": true, 00:47:44.144 "write_zeroes": true 00:47:44.144 }, 00:47:44.144 "uuid": "4d97303d-e7a3-4f77-8dec-c90feb364934", 00:47:44.144 "zoned": false 00:47:44.144 } 00:47:44.144 ] 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.8KAHrSLzg0 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:47:44.144 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:44.145 15:01:03 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:47:44.145 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:44.145 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:44.145 rmmod nvme_tcp 00:47:44.145 rmmod nvme_fabrics 00:47:44.404 rmmod nvme_keyring 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 103553 ']' 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 103553 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 103553 ']' 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 103553 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103553 00:47:44.404 killing process with pid 103553 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103553' 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 103553 00:47:44.404 [2024-07-22 15:01:03.852370] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:47:44.404 [2024-07-22 15:01:03.852406] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:47:44.404 15:01:03 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 103553 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:44.404 15:01:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:44.666 15:01:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:47:44.666 00:47:44.666 real 0m2.649s 00:47:44.666 user 0m2.341s 00:47:44.666 sys 0m0.688s 00:47:44.666 ************************************ 00:47:44.666 END TEST nvmf_async_init 00:47:44.666 ************************************ 00:47:44.666 15:01:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:44.666 15:01:04 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:47:44.666 15:01:04 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:47:44.666 15:01:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:44.666 15:01:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:44.666 15:01:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:44.666 ************************************ 00:47:44.666 START TEST dma 00:47:44.666 ************************************ 00:47:44.666 15:01:04 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:47:44.666 * Looking for test storage... 00:47:44.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:44.666 15:01:04 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:44.666 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:47:44.666 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:44.666 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:44.666 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:44.666 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:44.667 15:01:04 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:44.667 15:01:04 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:44.667 15:01:04 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:44.667 15:01:04 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.667 15:01:04 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.667 15:01:04 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.667 15:01:04 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:47:44.667 15:01:04 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:44.667 15:01:04 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:44.667 15:01:04 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:47:44.667 15:01:04 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:47:44.667 00:47:44.667 real 0m0.158s 00:47:44.667 user 0m0.080s 00:47:44.667 sys 0m0.086s 00:47:44.667 15:01:04 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:44.667 15:01:04 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:47:44.667 ************************************ 00:47:44.667 END TEST dma 00:47:44.667 ************************************ 00:47:44.926 15:01:04 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:47:44.926 15:01:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:44.926 15:01:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:44.926 15:01:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:44.926 ************************************ 00:47:44.926 START TEST nvmf_identify 
00:47:44.926 ************************************ 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:47:44.926 * Looking for test storage... 00:47:44.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:44.926 15:01:04 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:47:44.927 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:47:44.927 Cannot find device "nvmf_tgt_br" 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:47:45.186 Cannot find device "nvmf_tgt_br2" 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:47:45.186 Cannot find device "nvmf_tgt_br" 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:47:45.186 Cannot find device "nvmf_tgt_br2" 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:45.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:45.186 15:01:04 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:45.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:47:45.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:47:45.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:47:45.186 00:47:45.186 --- 10.0.0.2 ping statistics --- 00:47:45.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:45.186 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:47:45.186 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:45.186 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:47:45.186 00:47:45.186 --- 10.0.0.3 ping statistics --- 00:47:45.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:45.186 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:47:45.186 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:45.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:47:45.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:47:45.444 00:47:45.444 --- 10.0.0.1 ping statistics --- 00:47:45.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:45.444 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=103822 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 103822 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 103822 ']' 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:45.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:45.444 15:01:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:45.444 [2024-07-22 15:01:04.913041] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:45.444 [2024-07-22 15:01:04.913171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:45.444 [2024-07-22 15:01:05.053280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:45.702 [2024-07-22 15:01:05.103522] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:45.702 [2024-07-22 15:01:05.103661] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:47:45.703 [2024-07-22 15:01:05.103678] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:45.703 [2024-07-22 15:01:05.103698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:45.703 [2024-07-22 15:01:05.103703] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:45.703 [2024-07-22 15:01:05.103985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:45.703 [2024-07-22 15:01:05.104096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:47:45.703 [2024-07-22 15:01:05.104280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:45.703 [2024-07-22 15:01:05.104284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 [2024-07-22 15:01:05.756470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 Malloc0 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 15:01:05 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 [2024-07-22 15:01:05.862991] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:46.271 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:46.271 [ 00:47:46.271 { 00:47:46.271 "allow_any_host": true, 00:47:46.271 "hosts": [], 00:47:46.271 "listen_addresses": [ 00:47:46.271 { 00:47:46.271 "adrfam": "IPv4", 00:47:46.271 "traddr": "10.0.0.2", 00:47:46.271 "trsvcid": "4420", 00:47:46.271 "trtype": "TCP" 00:47:46.271 } 00:47:46.271 ], 00:47:46.271 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:47:46.271 "subtype": "Discovery" 00:47:46.271 }, 00:47:46.271 { 00:47:46.271 "allow_any_host": true, 00:47:46.271 "hosts": [], 00:47:46.271 "listen_addresses": [ 00:47:46.271 { 00:47:46.271 "adrfam": "IPv4", 00:47:46.272 "traddr": "10.0.0.2", 00:47:46.272 "trsvcid": "4420", 00:47:46.272 "trtype": "TCP" 00:47:46.272 } 00:47:46.272 ], 00:47:46.272 "max_cntlid": 65519, 00:47:46.272 "max_namespaces": 32, 00:47:46.272 "min_cntlid": 1, 00:47:46.272 "model_number": "SPDK bdev Controller", 00:47:46.272 "namespaces": [ 00:47:46.272 { 00:47:46.272 "bdev_name": "Malloc0", 00:47:46.272 "eui64": "ABCDEF0123456789", 00:47:46.272 "name": "Malloc0", 00:47:46.272 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:47:46.272 "nsid": 1, 00:47:46.272 "uuid": "72d57f37-b4c6-4402-88b5-0e99555d8e32" 00:47:46.272 } 00:47:46.272 ], 00:47:46.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:47:46.272 "serial_number": "SPDK00000000000001", 00:47:46.272 "subtype": "NVMe" 00:47:46.272 } 00:47:46.272 ] 00:47:46.272 15:01:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:46.272 15:01:05 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:47:46.534 
[2024-07-22 15:01:05.919262] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:46.534 [2024-07-22 15:01:05.919312] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103874 ] 00:47:46.534 [2024-07-22 15:01:06.052218] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:47:46.534 [2024-07-22 15:01:06.052272] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:47:46.534 [2024-07-22 15:01:06.052277] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:47:46.534 [2024-07-22 15:01:06.052289] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:47:46.534 [2024-07-22 15:01:06.052298] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:47:46.534 [2024-07-22 15:01:06.052421] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:47:46.534 [2024-07-22 15:01:06.052460] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b50970 0 00:47:46.534 [2024-07-22 15:01:06.058683] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:47:46.534 [2024-07-22 15:01:06.058707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:47:46.534 [2024-07-22 15:01:06.058711] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:47:46.534 [2024-07-22 15:01:06.058713] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:47:46.534 [2024-07-22 15:01:06.058756] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.058763] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.058768] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.534 [2024-07-22 15:01:06.058782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:47:46.534 [2024-07-22 15:01:06.058806] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.534 [2024-07-22 15:01:06.069690] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.534 [2024-07-22 15:01:06.069708] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.534 [2024-07-22 15:01:06.069711] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.069715] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.534 [2024-07-22 15:01:06.069725] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:47:46.534 [2024-07-22 15:01:06.069731] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:47:46.534 [2024-07-22 15:01:06.069736] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:47:46.534 [2024-07-22 15:01:06.069752] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.534 [2024-07-22 
15:01:06.069756] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.069759] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.534 [2024-07-22 15:01:06.069768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.534 [2024-07-22 15:01:06.069792] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.534 [2024-07-22 15:01:06.069859] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.534 [2024-07-22 15:01:06.069865] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.534 [2024-07-22 15:01:06.069867] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.069871] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.534 [2024-07-22 15:01:06.069876] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:47:46.534 [2024-07-22 15:01:06.069881] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:47:46.534 [2024-07-22 15:01:06.069887] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.069890] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.069893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.534 [2024-07-22 15:01:06.069899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.534 [2024-07-22 15:01:06.069912] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.534 [2024-07-22 15:01:06.069978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.534 [2024-07-22 15:01:06.069983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.534 [2024-07-22 15:01:06.069986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.069989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.534 [2024-07-22 15:01:06.069996] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:47:46.534 [2024-07-22 15:01:06.070002] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:47:46.534 [2024-07-22 15:01:06.070008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.070011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.070014] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.534 [2024-07-22 15:01:06.070020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.534 [2024-07-22 15:01:06.070033] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.534 [2024-07-22 15:01:06.070078] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.534 [2024-07-22 15:01:06.070084] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.534 [2024-07-22 15:01:06.070086] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.070089] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.534 [2024-07-22 15:01:06.070094] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:47:46.534 [2024-07-22 15:01:06.070101] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.070105] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.534 [2024-07-22 15:01:06.070108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.534 [2024-07-22 15:01:06.070114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.535 [2024-07-22 15:01:06.070126] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.535 [2024-07-22 15:01:06.070180] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.535 [2024-07-22 15:01:06.070186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.535 [2024-07-22 15:01:06.070188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.535 [2024-07-22 15:01:06.070195] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:47:46.535 [2024-07-22 15:01:06.070199] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:47:46.535 [2024-07-22 15:01:06.070205] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:47:46.535 [2024-07-22 15:01:06.070310] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:47:46.535 [2024-07-22 15:01:06.070315] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:47:46.535 [2024-07-22 15:01:06.070321] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070327] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.535 [2024-07-22 15:01:06.070347] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.535 [2024-07-22 15:01:06.070397] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.535 [2024-07-22 15:01:06.070403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:47:46.535 [2024-07-22 15:01:06.070406] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070409] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.535 [2024-07-22 15:01:06.070413] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:47:46.535 [2024-07-22 15:01:06.070420] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070423] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070426] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.535 [2024-07-22 15:01:06.070444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.535 [2024-07-22 15:01:06.070502] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.535 [2024-07-22 15:01:06.070508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.535 [2024-07-22 15:01:06.070510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070513] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.535 [2024-07-22 15:01:06.070518] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:47:46.535 [2024-07-22 15:01:06.070532] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:47:46.535 [2024-07-22 15:01:06.070538] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:47:46.535 [2024-07-22 15:01:06.070545] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:47:46.535 [2024-07-22 15:01:06.070552] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070554] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.535 [2024-07-22 15:01:06.070572] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.535 [2024-07-22 15:01:06.070669] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.535 [2024-07-22 15:01:06.070675] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.535 [2024-07-22 15:01:06.070687] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070690] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b50970): datao=0, datal=4096, cccid=0 00:47:46.535 [2024-07-22 15:01:06.070693] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b891d0) on tqpair(0x1b50970): expected_datao=0, 
payload_size=4096 00:47:46.535 [2024-07-22 15:01:06.070697] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070704] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070707] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070714] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.535 [2024-07-22 15:01:06.070719] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.535 [2024-07-22 15:01:06.070721] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070724] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.535 [2024-07-22 15:01:06.070731] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:47:46.535 [2024-07-22 15:01:06.070735] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:47:46.535 [2024-07-22 15:01:06.070739] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:47:46.535 [2024-07-22 15:01:06.070742] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:47:46.535 [2024-07-22 15:01:06.070746] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:47:46.535 [2024-07-22 15:01:06.070750] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:47:46.535 [2024-07-22 15:01:06.070762] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:47:46.535 [2024-07-22 15:01:06.070770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070773] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070776] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:46.535 [2024-07-22 15:01:06.070796] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.535 [2024-07-22 15:01:06.070852] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.535 [2024-07-22 15:01:06.070857] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.535 [2024-07-22 15:01:06.070860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070863] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b891d0) on tqpair=0x1b50970 00:47:46.535 [2024-07-22 15:01:06.070869] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070872] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070875] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.535 [2024-07-22 15:01:06.070885] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070887] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070890] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.535 [2024-07-22 15:01:06.070899] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.535 [2024-07-22 15:01:06.070913] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070916] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070918] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.535 [2024-07-22 15:01:06.070942] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:47:46.535 [2024-07-22 15:01:06.070949] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:47:46.535 [2024-07-22 15:01:06.070954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.070957] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b50970) 00:47:46.535 [2024-07-22 15:01:06.070963] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.535 [2024-07-22 15:01:06.070980] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b891d0, cid 0, qid 0 00:47:46.535 [2024-07-22 15:01:06.070985] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b89330, cid 1, qid 0 00:47:46.535 [2024-07-22 15:01:06.070989] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b89490, cid 2, qid 0 00:47:46.535 [2024-07-22 15:01:06.070993] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.535 [2024-07-22 15:01:06.070997] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b89750, cid 4, qid 0 00:47:46.535 [2024-07-22 15:01:06.071108] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.535 [2024-07-22 15:01:06.071130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.535 [2024-07-22 15:01:06.071133] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.535 [2024-07-22 15:01:06.071136] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1b89750) on tqpair=0x1b50970 00:47:46.535 [2024-07-22 15:01:06.071141] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:47:46.536 [2024-07-22 15:01:06.071145] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:47:46.536 [2024-07-22 15:01:06.071154] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071157] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b50970) 00:47:46.536 [2024-07-22 15:01:06.071163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.536 [2024-07-22 15:01:06.071176] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b89750, cid 4, qid 0 00:47:46.536 [2024-07-22 15:01:06.071237] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.536 [2024-07-22 15:01:06.071242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.536 [2024-07-22 15:01:06.071245] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071248] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b50970): datao=0, datal=4096, cccid=4 00:47:46.536 [2024-07-22 15:01:06.071251] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b89750) on tqpair(0x1b50970): expected_datao=0, payload_size=4096 00:47:46.536 [2024-07-22 15:01:06.071254] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071260] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071263] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.536 [2024-07-22 15:01:06.071275] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.536 [2024-07-22 15:01:06.071278] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071280] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b89750) on tqpair=0x1b50970 00:47:46.536 [2024-07-22 15:01:06.071291] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:47:46.536 [2024-07-22 15:01:06.071324] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071328] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b50970) 00:47:46.536 [2024-07-22 15:01:06.071334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.536 [2024-07-22 15:01:06.071340] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071343] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071346] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b50970) 00:47:46.536 [2024-07-22 15:01:06.071351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:47:46.536 [2024-07-22 15:01:06.071369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b89750, cid 4, qid 0 00:47:46.536 [2024-07-22 15:01:06.071374] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b898b0, cid 5, qid 0 00:47:46.536 [2024-07-22 15:01:06.071487] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.536 [2024-07-22 15:01:06.071499] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.536 [2024-07-22 15:01:06.071502] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071505] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b50970): datao=0, datal=1024, cccid=4 00:47:46.536 [2024-07-22 15:01:06.071508] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b89750) on tqpair(0x1b50970): expected_datao=0, payload_size=1024 00:47:46.536 [2024-07-22 15:01:06.071512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071518] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071521] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.536 [2024-07-22 15:01:06.071531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.536 [2024-07-22 15:01:06.071533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.071536] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b898b0) on tqpair=0x1b50970 00:47:46.536 [2024-07-22 15:01:06.111736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.536 [2024-07-22 15:01:06.111764] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.536 [2024-07-22 15:01:06.111768] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111772] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b89750) on tqpair=0x1b50970 00:47:46.536 [2024-07-22 15:01:06.111790] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111793] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b50970) 00:47:46.536 [2024-07-22 15:01:06.111802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.536 [2024-07-22 15:01:06.111833] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b89750, cid 4, qid 0 00:47:46.536 [2024-07-22 15:01:06.111918] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.536 [2024-07-22 15:01:06.111924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.536 [2024-07-22 15:01:06.111927] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111930] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b50970): datao=0, datal=3072, cccid=4 00:47:46.536 [2024-07-22 15:01:06.111934] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b89750) on tqpair(0x1b50970): expected_datao=0, payload_size=3072 00:47:46.536 [2024-07-22 15:01:06.111937] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111944] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111947] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111954] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.536 [2024-07-22 15:01:06.111959] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.536 [2024-07-22 15:01:06.111962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111965] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b89750) on tqpair=0x1b50970 00:47:46.536 [2024-07-22 15:01:06.111973] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.111976] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b50970) 00:47:46.536 [2024-07-22 15:01:06.111982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.536 [2024-07-22 15:01:06.112000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b89750, cid 4, qid 0 00:47:46.536 [2024-07-22 15:01:06.112067] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.536 [2024-07-22 15:01:06.112073] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.536 [2024-07-22 15:01:06.112075] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.112078] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b50970): datao=0, datal=8, cccid=4 00:47:46.536 [2024-07-22 15:01:06.112082] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b89750) on tqpair(0x1b50970): expected_datao=0, payload_size=8 00:47:46.536 [2024-07-22 15:01:06.112085] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.112090] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.536 [2024-07-22 15:01:06.112093] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.536 ===================================================== 00:47:46.536 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:47:46.536 ===================================================== 00:47:46.536 Controller Capabilities/Features 00:47:46.536 ================================ 00:47:46.536 Vendor ID: 0000 00:47:46.536 Subsystem Vendor ID: 0000 00:47:46.536 Serial Number: .................... 00:47:46.536 Model Number: ........................................ 
00:47:46.536 Firmware Version: 24.05.1 00:47:46.536 Recommended Arb Burst: 0 00:47:46.536 IEEE OUI Identifier: 00 00 00 00:47:46.536 Multi-path I/O 00:47:46.536 May have multiple subsystem ports: No 00:47:46.536 May have multiple controllers: No 00:47:46.536 Associated with SR-IOV VF: No 00:47:46.536 Max Data Transfer Size: 131072 00:47:46.536 Max Number of Namespaces: 0 00:47:46.536 Max Number of I/O Queues: 1024 00:47:46.536 NVMe Specification Version (VS): 1.3 00:47:46.536 NVMe Specification Version (Identify): 1.3 00:47:46.536 Maximum Queue Entries: 128 00:47:46.536 Contiguous Queues Required: Yes 00:47:46.536 Arbitration Mechanisms Supported 00:47:46.536 Weighted Round Robin: Not Supported 00:47:46.536 Vendor Specific: Not Supported 00:47:46.536 Reset Timeout: 15000 ms 00:47:46.536 Doorbell Stride: 4 bytes 00:47:46.536 NVM Subsystem Reset: Not Supported 00:47:46.536 Command Sets Supported 00:47:46.536 NVM Command Set: Supported 00:47:46.536 Boot Partition: Not Supported 00:47:46.536 Memory Page Size Minimum: 4096 bytes 00:47:46.536 Memory Page Size Maximum: 4096 bytes 00:47:46.536 Persistent Memory Region: Not Supported 00:47:46.536 Optional Asynchronous Events Supported 00:47:46.536 Namespace Attribute Notices: Not Supported 00:47:46.536 Firmware Activation Notices: Not Supported 00:47:46.536 ANA Change Notices: Not Supported 00:47:46.536 PLE Aggregate Log Change Notices: Not Supported 00:47:46.536 LBA Status Info Alert Notices: Not Supported 00:47:46.536 EGE Aggregate Log Change Notices: Not Supported 00:47:46.536 Normal NVM Subsystem Shutdown event: Not Supported 00:47:46.536 Zone Descriptor Change Notices: Not Supported 00:47:46.536 Discovery Log Change Notices: Supported 00:47:46.536 Controller Attributes 00:47:46.536 128-bit Host Identifier: Not Supported 00:47:46.536 Non-Operational Permissive Mode: Not Supported 00:47:46.536 NVM Sets: Not Supported 00:47:46.536 Read Recovery Levels: Not Supported 00:47:46.536 Endurance Groups: Not Supported 00:47:46.536 Predictable Latency Mode: Not Supported 00:47:46.536 Traffic Based Keep ALive: Not Supported 00:47:46.536 Namespace Granularity: Not Supported 00:47:46.536 SQ Associations: Not Supported 00:47:46.536 UUID List: Not Supported 00:47:46.536 Multi-Domain Subsystem: Not Supported 00:47:46.536 Fixed Capacity Management: Not Supported 00:47:46.537 Variable Capacity Management: Not Supported 00:47:46.537 Delete Endurance Group: Not Supported 00:47:46.537 Delete NVM Set: Not Supported 00:47:46.537 Extended LBA Formats Supported: Not Supported 00:47:46.537 Flexible Data Placement Supported: Not Supported 00:47:46.537 00:47:46.537 Controller Memory Buffer Support 00:47:46.537 ================================ 00:47:46.537 Supported: No 00:47:46.537 00:47:46.537 Persistent Memory Region Support 00:47:46.537 ================================ 00:47:46.537 Supported: No 00:47:46.537 00:47:46.537 Admin Command Set Attributes 00:47:46.537 ============================ 00:47:46.537 Security Send/Receive: Not Supported 00:47:46.537 Format NVM: Not Supported 00:47:46.537 Firmware Activate/Download: Not Supported 00:47:46.537 Namespace Management: Not Supported 00:47:46.537 Device Self-Test: Not Supported 00:47:46.537 Directives: Not Supported 00:47:46.537 NVMe-MI: Not Supported 00:47:46.537 Virtualization Management: Not Supported 00:47:46.537 Doorbell Buffer Config: Not Supported 00:47:46.537 Get LBA Status Capability: Not Supported 00:47:46.537 Command & Feature Lockdown Capability: Not Supported 00:47:46.537 Abort Command Limit: 1 00:47:46.537 
Async Event Request Limit: 4 00:47:46.537 Number of Firmware Slots: N/A 00:47:46.537 Firmware Slot 1 Read-Only: N/A 00:47:46.537 [2024-07-22 15:01:06.152738] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.537 [2024-07-22 15:01:06.152768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.537 [2024-07-22 15:01:06.152774] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.152780] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b89750) on tqpair=0x1b50970 00:47:46.537 Firmware Activation Without Reset: N/A 00:47:46.537 Multiple Update Detection Support: N/A 00:47:46.537 Firmware Update Granularity: No Information Provided 00:47:46.537 Per-Namespace SMART Log: No 00:47:46.537 Asymmetric Namespace Access Log Page: Not Supported 00:47:46.537 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:47:46.537 Command Effects Log Page: Not Supported 00:47:46.537 Get Log Page Extended Data: Supported 00:47:46.537 Telemetry Log Pages: Not Supported 00:47:46.537 Persistent Event Log Pages: Not Supported 00:47:46.537 Supported Log Pages Log Page: May Support 00:47:46.537 Commands Supported & Effects Log Page: Not Supported 00:47:46.537 Feature Identifiers & Effects Log Page:May Support 00:47:46.537 NVMe-MI Commands & Effects Log Page: May Support 00:47:46.537 Data Area 4 for Telemetry Log: Not Supported 00:47:46.537 Error Log Page Entries Supported: 128 00:47:46.537 Keep Alive: Not Supported 00:47:46.537 00:47:46.537 NVM Command Set Attributes 00:47:46.537 ========================== 00:47:46.537 Submission Queue Entry Size 00:47:46.537 Max: 1 00:47:46.537 Min: 1 00:47:46.537 Completion Queue Entry Size 00:47:46.537 Max: 1 00:47:46.537 Min: 1 00:47:46.537 Number of Namespaces: 0 00:47:46.537 Compare Command: Not Supported 00:47:46.537 Write Uncorrectable Command: Not Supported 00:47:46.537 Dataset Management Command: Not Supported 00:47:46.537 Write Zeroes Command: Not Supported 00:47:46.537 Set Features Save Field: Not Supported 00:47:46.537 Reservations: Not Supported 00:47:46.537 Timestamp: Not Supported 00:47:46.537 Copy: Not Supported 00:47:46.537 Volatile Write Cache: Not Present 00:47:46.537 Atomic Write Unit (Normal): 1 00:47:46.537 Atomic Write Unit (PFail): 1 00:47:46.537 Atomic Compare & Write Unit: 1 00:47:46.537 Fused Compare & Write: Supported 00:47:46.537 Scatter-Gather List 00:47:46.537 SGL Command Set: Supported 00:47:46.537 SGL Keyed: Supported 00:47:46.537 SGL Bit Bucket Descriptor: Not Supported 00:47:46.537 SGL Metadata Pointer: Not Supported 00:47:46.537 Oversized SGL: Not Supported 00:47:46.537 SGL Metadata Address: Not Supported 00:47:46.537 SGL Offset: Supported 00:47:46.537 Transport SGL Data Block: Not Supported 00:47:46.537 Replay Protected Memory Block: Not Supported 00:47:46.537 00:47:46.537 Firmware Slot Information 00:47:46.537 ========================= 00:47:46.537 Active slot: 0 00:47:46.537 00:47:46.537 00:47:46.537 Error Log 00:47:46.537 ========= 00:47:46.537 00:47:46.537 Active Namespaces 00:47:46.537 ================= 00:47:46.537 Discovery Log Page 00:47:46.537 ================== 00:47:46.537 Generation Counter: 2 00:47:46.537 Number of Records: 2 00:47:46.537 Record Format: 0 00:47:46.537 00:47:46.537 Discovery Log Entry 0 00:47:46.537 ---------------------- 00:47:46.537 Transport Type: 3 (TCP) 00:47:46.537 Address Family: 1 (IPv4) 00:47:46.537 Subsystem Type: 3 (Current Discovery Subsystem) 00:47:46.537 Entry Flags: 00:47:46.537 Duplicate 
Returned Information: 1 00:47:46.537 Explicit Persistent Connection Support for Discovery: 1 00:47:46.537 Transport Requirements: 00:47:46.537 Secure Channel: Not Required 00:47:46.537 Port ID: 0 (0x0000) 00:47:46.537 Controller ID: 65535 (0xffff) 00:47:46.537 Admin Max SQ Size: 128 00:47:46.537 Transport Service Identifier: 4420 00:47:46.537 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:47:46.537 Transport Address: 10.0.0.2 00:47:46.537 Discovery Log Entry 1 00:47:46.537 ---------------------- 00:47:46.537 Transport Type: 3 (TCP) 00:47:46.537 Address Family: 1 (IPv4) 00:47:46.537 Subsystem Type: 2 (NVM Subsystem) 00:47:46.537 Entry Flags: 00:47:46.537 Duplicate Returned Information: 0 00:47:46.537 Explicit Persistent Connection Support for Discovery: 0 00:47:46.537 Transport Requirements: 00:47:46.537 Secure Channel: Not Required 00:47:46.537 Port ID: 0 (0x0000) 00:47:46.537 Controller ID: 65535 (0xffff) 00:47:46.537 Admin Max SQ Size: 128 00:47:46.537 Transport Service Identifier: 4420 00:47:46.537 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:47:46.537 Transport Address: 10.0.0.2 [2024-07-22 15:01:06.152909] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:47:46.537 [2024-07-22 15:01:06.152923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.537 [2024-07-22 15:01:06.152929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.537 [2024-07-22 15:01:06.152934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.537 [2024-07-22 15:01:06.152939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.537 [2024-07-22 15:01:06.152949] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.152952] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.152955] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.537 [2024-07-22 15:01:06.152964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.537 [2024-07-22 15:01:06.152986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.537 [2024-07-22 15:01:06.153044] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.537 [2024-07-22 15:01:06.153049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.537 [2024-07-22 15:01:06.153052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.153055] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.537 [2024-07-22 15:01:06.153062] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.153065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.153068] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.537 [2024-07-22 15:01:06.153074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.537 [2024-07-22 15:01:06.153090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.537 [2024-07-22 15:01:06.153173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.537 [2024-07-22 15:01:06.153178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.537 [2024-07-22 15:01:06.153181] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.153184] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.537 [2024-07-22 15:01:06.153192] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:47:46.537 [2024-07-22 15:01:06.153196] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:47:46.537 [2024-07-22 15:01:06.153204] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.153207] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.537 [2024-07-22 15:01:06.153210] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.537 [2024-07-22 15:01:06.153216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.153229] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153277] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153283] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153285] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153288] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153298] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153301] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153304] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.153309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.153322] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153372] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153374] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153377] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153386] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.153398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.153410] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153460] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153466] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153468] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153479] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.153491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.153504] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153562] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153567] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153573] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153584] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153587] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.153593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.153605] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153663] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153679] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153694] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.153700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:47:46.538 [2024-07-22 15:01:06.153713] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153775] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153787] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153790] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153799] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153802] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.153811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.153824] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153879] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153884] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153893] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153896] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153899] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.153905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.153917] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.153977] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.153982] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.153985] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153988] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.153996] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.153999] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154002] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.154008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.154020] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.154083] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.154090] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.154093] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154095] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.154104] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154107] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154110] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.154116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.154130] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.154180] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.154186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.154188] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.154199] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154203] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.154211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.538 [2024-07-22 15:01:06.154224] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.538 [2024-07-22 15:01:06.154279] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.538 [2024-07-22 15:01:06.154285] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.538 [2024-07-22 15:01:06.154287] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154290] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.538 [2024-07-22 15:01:06.154298] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154302] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.538 [2024-07-22 15:01:06.154305] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.538 [2024-07-22 15:01:06.154310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154324] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.154371] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.154377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 
[2024-07-22 15:01:06.154379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154382] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.154390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154394] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154397] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.154402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154415] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.154470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.154475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.154478] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154480] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.154489] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154492] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154495] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.154501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154513] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.154561] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.154566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.154569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154572] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.154580] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154583] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154586] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.154592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154605] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.154659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.154665] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.154677] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154680] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.154688] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154694] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.154700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154714] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.154763] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.154768] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.154771] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154773] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.154782] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154785] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154788] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.154794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154806] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.154861] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.154866] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.154868] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154871] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.154880] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154883] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154886] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.154891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154904] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.154952] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.154957] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.154960] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154963] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.154971] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154974] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.154977] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.154983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.154995] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.155056] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.155061] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.155064] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155066] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.155074] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155078] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155081] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.155086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.155099] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.155149] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.155154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.155157] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155160] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.155168] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155172] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155174] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.155180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.155193] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.155245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.539 [2024-07-22 15:01:06.155250] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.539 [2024-07-22 15:01:06.155253] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155256] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.539 [2024-07-22 15:01:06.155264] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.539 [2024-07-22 15:01:06.155270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1b50970) 00:47:46.539 [2024-07-22 15:01:06.155276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.539 [2024-07-22 15:01:06.155288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.539 [2024-07-22 15:01:06.155342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.540 [2024-07-22 15:01:06.155347] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.540 [2024-07-22 15:01:06.155350] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.540 [2024-07-22 15:01:06.155361] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155364] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.540 [2024-07-22 15:01:06.155373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.540 [2024-07-22 15:01:06.155385] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.540 [2024-07-22 15:01:06.155436] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.540 [2024-07-22 15:01:06.155441] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.540 [2024-07-22 15:01:06.155444] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155447] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.540 [2024-07-22 15:01:06.155455] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155459] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155461] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.540 [2024-07-22 15:01:06.155467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.540 [2024-07-22 15:01:06.155480] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.540 [2024-07-22 15:01:06.155525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.540 [2024-07-22 15:01:06.155530] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.540 [2024-07-22 15:01:06.155533] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155536] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.540 [2024-07-22 15:01:06.155545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155548] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.540 [2024-07-22 15:01:06.155557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:47:46.540 [2024-07-22 15:01:06.155569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.540 [2024-07-22 15:01:06.155626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.540 [2024-07-22 15:01:06.155632] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.540 [2024-07-22 15:01:06.155634] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155637] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.540 [2024-07-22 15:01:06.155645] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155648] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.155651] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.540 [2024-07-22 15:01:06.155657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.540 [2024-07-22 15:01:06.159682] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.540 [2024-07-22 15:01:06.159699] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.540 [2024-07-22 15:01:06.159705] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.540 [2024-07-22 15:01:06.159707] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.159709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.540 [2024-07-22 15:01:06.159720] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.159723] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.159726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b50970) 00:47:46.540 [2024-07-22 15:01:06.159732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.540 [2024-07-22 15:01:06.159750] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b895f0, cid 3, qid 0 00:47:46.540 [2024-07-22 15:01:06.159795] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.540 [2024-07-22 15:01:06.159799] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.540 [2024-07-22 15:01:06.159801] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.540 [2024-07-22 15:01:06.159804] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b895f0) on tqpair=0x1b50970 00:47:46.540 [2024-07-22 15:01:06.159809] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:47:46.804 00:47:46.804 15:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:47:46.804 [2024-07-22 15:01:06.198282] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:47:46.804 [2024-07-22 15:01:06.198321] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103880 ] 00:47:46.804 [2024-07-22 15:01:06.327563] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:47:46.804 [2024-07-22 15:01:06.327614] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:47:46.804 [2024-07-22 15:01:06.327618] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:47:46.804 [2024-07-22 15:01:06.327629] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:47:46.804 [2024-07-22 15:01:06.327636] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:47:46.804 [2024-07-22 15:01:06.334801] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:47:46.804 [2024-07-22 15:01:06.334845] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9d1970 0 00:47:46.804 [2024-07-22 15:01:06.342753] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:47:46.804 [2024-07-22 15:01:06.342771] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:47:46.804 [2024-07-22 15:01:06.342775] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:47:46.805 [2024-07-22 15:01:06.342778] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:47:46.805 [2024-07-22 15:01:06.342817] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.342823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.342826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.342839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:47:46.805 [2024-07-22 15:01:06.342868] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.350703] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.350717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.350720] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350723] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.805 [2024-07-22 15:01:06.350732] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:47:46.805 [2024-07-22 15:01:06.350737] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:47:46.805 [2024-07-22 15:01:06.350742] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:47:46.805 [2024-07-22 15:01:06.350754] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350759] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.350765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.805 [2024-07-22 15:01:06.350784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.350847] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.350857] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.350860] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350862] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.805 [2024-07-22 15:01:06.350866] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:47:46.805 [2024-07-22 15:01:06.350871] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:47:46.805 [2024-07-22 15:01:06.350875] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350878] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350880] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.350885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.805 [2024-07-22 15:01:06.350898] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.350937] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.350942] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.350944] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350946] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.805 [2024-07-22 15:01:06.350950] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:47:46.805 [2024-07-22 15:01:06.350956] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:47:46.805 [2024-07-22 15:01:06.350961] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350963] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.350966] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.350970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.805 [2024-07-22 15:01:06.350981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.351029] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.351033] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.351035] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351039] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.805 [2024-07-22 15:01:06.351044] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:47:46.805 [2024-07-22 15:01:06.351053] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351057] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351060] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.351065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.805 [2024-07-22 15:01:06.351076] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.351121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.351125] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.351127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351130] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.805 [2024-07-22 15:01:06.351133] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:47:46.805 [2024-07-22 15:01:06.351136] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:47:46.805 [2024-07-22 15:01:06.351142] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:47:46.805 [2024-07-22 15:01:06.351256] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:47:46.805 [2024-07-22 15:01:06.351264] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:47:46.805 [2024-07-22 15:01:06.351271] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351274] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351277] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.351283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.805 [2024-07-22 15:01:06.351296] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.351354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.351359] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.351362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351365] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.805 [2024-07-22 15:01:06.351368] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:47:46.805 [2024-07-22 15:01:06.351376] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351380] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351382] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.351388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.805 [2024-07-22 15:01:06.351400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.351452] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.351457] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.351459] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.805 [2024-07-22 15:01:06.351466] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:47:46.805 [2024-07-22 15:01:06.351469] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:47:46.805 [2024-07-22 15:01:06.351476] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:47:46.805 [2024-07-22 15:01:06.351483] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:47:46.805 [2024-07-22 15:01:06.351491] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351494] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.805 [2024-07-22 15:01:06.351500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.805 [2024-07-22 15:01:06.351512] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.805 [2024-07-22 15:01:06.351605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.805 [2024-07-22 15:01:06.351614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.805 [2024-07-22 15:01:06.351617] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351620] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=4096, cccid=0 00:47:46.805 [2024-07-22 15:01:06.351624] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa0a1d0) on tqpair(0x9d1970): expected_datao=0, payload_size=4096 00:47:46.805 [2024-07-22 15:01:06.351628] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351635] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351638] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 
15:01:06.351645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.805 [2024-07-22 15:01:06.351650] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.805 [2024-07-22 15:01:06.351653] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.805 [2024-07-22 15:01:06.351656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.806 [2024-07-22 15:01:06.351662] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:47:46.806 [2024-07-22 15:01:06.351666] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:47:46.806 [2024-07-22 15:01:06.351680] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:47:46.806 [2024-07-22 15:01:06.351684] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:47:46.806 [2024-07-22 15:01:06.351687] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:47:46.806 [2024-07-22 15:01:06.351691] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.351701] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.351707] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351711] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351713] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.351719] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:46.806 [2024-07-22 15:01:06.351735] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.806 [2024-07-22 15:01:06.351796] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.806 [2024-07-22 15:01:06.351805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.806 [2024-07-22 15:01:06.351808] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351811] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a1d0) on tqpair=0x9d1970 00:47:46.806 [2024-07-22 15:01:06.351818] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351821] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.351829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.806 [2024-07-22 15:01:06.351833] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351837] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351839] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9d1970) 
00:47:46.806 [2024-07-22 15:01:06.351844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.806 [2024-07-22 15:01:06.351849] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351852] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351854] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.351859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.806 [2024-07-22 15:01:06.351864] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351869] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.351874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.806 [2024-07-22 15:01:06.351878] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.351884] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.351889] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.351892] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.351898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.806 [2024-07-22 15:01:06.351915] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a1d0, cid 0, qid 0 00:47:46.806 [2024-07-22 15:01:06.351920] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a330, cid 1, qid 0 00:47:46.806 [2024-07-22 15:01:06.351924] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a490, cid 2, qid 0 00:47:46.806 [2024-07-22 15:01:06.351928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.806 [2024-07-22 15:01:06.351931] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a750, cid 4, qid 0 00:47:46.806 [2024-07-22 15:01:06.352038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.806 [2024-07-22 15:01:06.352049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.806 [2024-07-22 15:01:06.352052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352055] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a750) on tqpair=0x9d1970 00:47:46.806 [2024-07-22 15:01:06.352059] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:47:46.806 [2024-07-22 15:01:06.352064] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:47:46.806 [2024-07-22 
15:01:06.352070] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.352076] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.352080] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352083] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352086] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.352092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:47:46.806 [2024-07-22 15:01:06.352105] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a750, cid 4, qid 0 00:47:46.806 [2024-07-22 15:01:06.352158] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.806 [2024-07-22 15:01:06.352164] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.806 [2024-07-22 15:01:06.352166] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352169] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a750) on tqpair=0x9d1970 00:47:46.806 [2024-07-22 15:01:06.352230] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.352244] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.352251] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352254] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.352260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.806 [2024-07-22 15:01:06.352274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a750, cid 4, qid 0 00:47:46.806 [2024-07-22 15:01:06.352342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.806 [2024-07-22 15:01:06.352351] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.806 [2024-07-22 15:01:06.352354] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352356] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=4096, cccid=4 00:47:46.806 [2024-07-22 15:01:06.352360] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa0a750) on tqpair(0x9d1970): expected_datao=0, payload_size=4096 00:47:46.806 [2024-07-22 15:01:06.352363] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352369] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352372] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.806 [2024-07-22 15:01:06.352384] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.806 [2024-07-22 15:01:06.352387] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352390] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a750) on tqpair=0x9d1970 00:47:46.806 [2024-07-22 15:01:06.352402] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:47:46.806 [2024-07-22 15:01:06.352410] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.352418] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:47:46.806 [2024-07-22 15:01:06.352424] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352427] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d1970) 00:47:46.806 [2024-07-22 15:01:06.352432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.806 [2024-07-22 15:01:06.352446] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a750, cid 4, qid 0 00:47:46.806 [2024-07-22 15:01:06.352510] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.806 [2024-07-22 15:01:06.352518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.806 [2024-07-22 15:01:06.352521] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352524] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=4096, cccid=4 00:47:46.806 [2024-07-22 15:01:06.352527] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa0a750) on tqpair(0x9d1970): expected_datao=0, payload_size=4096 00:47:46.806 [2024-07-22 15:01:06.352530] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352536] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352539] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.806 [2024-07-22 15:01:06.352550] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.806 [2024-07-22 15:01:06.352553] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.806 [2024-07-22 15:01:06.352555] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a750) on tqpair=0x9d1970 00:47:46.806 [2024-07-22 15:01:06.352565] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352572] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352577] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352580] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.352585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.352599] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a750, cid 4, qid 0 00:47:46.807 [2024-07-22 15:01:06.352678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.807 [2024-07-22 15:01:06.352684] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.807 [2024-07-22 15:01:06.352687] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352689] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=4096, cccid=4 00:47:46.807 [2024-07-22 15:01:06.352692] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa0a750) on tqpair(0x9d1970): expected_datao=0, payload_size=4096 00:47:46.807 [2024-07-22 15:01:06.352695] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352701] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352704] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352710] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.807 [2024-07-22 15:01:06.352716] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.807 [2024-07-22 15:01:06.352718] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352721] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a750) on tqpair=0x9d1970 00:47:46.807 [2024-07-22 15:01:06.352728] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352736] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352746] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352751] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352755] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352760] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:47:46.807 [2024-07-22 15:01:06.352763] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:47:46.807 [2024-07-22 15:01:06.352767] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:47:46.807 [2024-07-22 15:01:06.352784] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352787] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.352793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.352799] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352802] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352805] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.352810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:47:46.807 [2024-07-22 15:01:06.352829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a750, cid 4, qid 0 00:47:46.807 [2024-07-22 15:01:06.352834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a8b0, cid 5, qid 0 00:47:46.807 [2024-07-22 15:01:06.352902] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.807 [2024-07-22 15:01:06.352908] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.807 [2024-07-22 15:01:06.352910] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352913] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a750) on tqpair=0x9d1970 00:47:46.807 [2024-07-22 15:01:06.352919] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.807 [2024-07-22 15:01:06.352923] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.807 [2024-07-22 15:01:06.352926] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352929] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a8b0) on tqpair=0x9d1970 00:47:46.807 [2024-07-22 15:01:06.352936] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.352939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.352944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.352957] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a8b0, cid 5, qid 0 00:47:46.807 [2024-07-22 15:01:06.353010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.807 [2024-07-22 15:01:06.353015] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.807 [2024-07-22 15:01:06.353018] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353021] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a8b0) on tqpair=0x9d1970 00:47:46.807 [2024-07-22 15:01:06.353028] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353031] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.353037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.353049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a8b0, cid 5, qid 0 00:47:46.807 [2024-07-22 15:01:06.353100] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.807 [2024-07-22 15:01:06.353106] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.807 [2024-07-22 15:01:06.353108] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a8b0) on tqpair=0x9d1970 00:47:46.807 [2024-07-22 15:01:06.353118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.353127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.353139] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a8b0, cid 5, qid 0 00:47:46.807 [2024-07-22 15:01:06.353187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.807 [2024-07-22 15:01:06.353193] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.807 [2024-07-22 15:01:06.353195] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a8b0) on tqpair=0x9d1970 00:47:46.807 [2024-07-22 15:01:06.353208] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353211] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.353217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.353223] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353226] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.353231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.353237] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.353244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.353251] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9d1970) 00:47:46.807 [2024-07-22 15:01:06.353259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.807 [2024-07-22 15:01:06.353272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a8b0, cid 5, qid 0 00:47:46.807 [2024-07-22 15:01:06.353277] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a750, cid 4, qid 0 00:47:46.807 [2024-07-22 15:01:06.353280] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0aa10, cid 6, qid 0 00:47:46.807 [2024-07-22 15:01:06.353285] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0ab70, cid 7, qid 0 00:47:46.807 [2024-07-22 15:01:06.353425] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.807 [2024-07-22 15:01:06.353437] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.807 [2024-07-22 15:01:06.353440] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353443] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=8192, cccid=5 00:47:46.807 [2024-07-22 15:01:06.353446] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa0a8b0) on tqpair(0x9d1970): expected_datao=0, payload_size=8192 00:47:46.807 [2024-07-22 15:01:06.353449] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353463] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353466] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353471] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.807 [2024-07-22 15:01:06.353475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.807 [2024-07-22 15:01:06.353478] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353480] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=512, cccid=4 00:47:46.807 [2024-07-22 15:01:06.353483] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa0a750) on tqpair(0x9d1970): expected_datao=0, payload_size=512 00:47:46.807 [2024-07-22 15:01:06.353486] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353492] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353495] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.807 [2024-07-22 15:01:06.353499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.807 [2024-07-22 15:01:06.353504] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.808 [2024-07-22 15:01:06.353506] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353509] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=512, cccid=6 00:47:46.808 [2024-07-22 15:01:06.353512] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa0aa10) on tqpair(0x9d1970): expected_datao=0, payload_size=512 00:47:46.808 [2024-07-22 15:01:06.353515] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353520] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353523] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353528] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:47:46.808 [2024-07-22 15:01:06.353532] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:47:46.808 [2024-07-22 15:01:06.353535] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353538] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d1970): datao=0, datal=4096, cccid=7 00:47:46.808 [2024-07-22 15:01:06.353541] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xa0ab70) on tqpair(0x9d1970): expected_datao=0, payload_size=4096 00:47:46.808 [2024-07-22 15:01:06.353544] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353549] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353552] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.808 [2024-07-22 15:01:06.353563] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.808 [2024-07-22 15:01:06.353566] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353569] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a8b0) on tqpair=0x9d1970 00:47:46.808 [2024-07-22 15:01:06.353581] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.808 [2024-07-22 15:01:06.353585] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.808 [2024-07-22 15:01:06.353588] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353591] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a750) on tqpair=0x9d1970 00:47:46.808 [2024-07-22 15:01:06.353599] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.808 [2024-07-22 15:01:06.353604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.808 [2024-07-22 15:01:06.353607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353609] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0aa10) on tqpair=0x9d1970 00:47:46.808 [2024-07-22 15:01:06.353619] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.808 [2024-07-22 15:01:06.353625] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.808 [2024-07-22 15:01:06.353627] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.808 [2024-07-22 15:01:06.353630] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0ab70) on tqpair=0x9d1970 00:47:46.808 ===================================================== 00:47:46.808 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:46.808 ===================================================== 00:47:46.808 Controller Capabilities/Features 00:47:46.808 ================================ 00:47:46.808 Vendor ID: 8086 00:47:46.808 Subsystem Vendor ID: 8086 00:47:46.808 Serial Number: SPDK00000000000001 00:47:46.808 Model Number: SPDK bdev Controller 00:47:46.808 Firmware Version: 24.05.1 00:47:46.808 Recommended Arb Burst: 6 00:47:46.808 IEEE OUI Identifier: e4 d2 5c 00:47:46.808 Multi-path I/O 00:47:46.808 May have multiple subsystem ports: Yes 00:47:46.808 May have multiple controllers: Yes 00:47:46.808 Associated with SR-IOV VF: No 00:47:46.808 Max Data Transfer Size: 131072 00:47:46.808 Max Number of Namespaces: 32 00:47:46.808 Max Number of I/O Queues: 127 00:47:46.808 NVMe Specification Version (VS): 1.3 00:47:46.808 NVMe Specification Version (Identify): 1.3 00:47:46.808 Maximum Queue Entries: 128 00:47:46.808 Contiguous Queues Required: Yes 00:47:46.808 Arbitration Mechanisms Supported 00:47:46.808 Weighted Round Robin: Not Supported 00:47:46.808 Vendor Specific: Not Supported 00:47:46.808 Reset Timeout: 15000 ms 00:47:46.808 Doorbell Stride: 4 bytes 00:47:46.808 
NVM Subsystem Reset: Not Supported 00:47:46.808 Command Sets Supported 00:47:46.808 NVM Command Set: Supported 00:47:46.808 Boot Partition: Not Supported 00:47:46.808 Memory Page Size Minimum: 4096 bytes 00:47:46.808 Memory Page Size Maximum: 4096 bytes 00:47:46.808 Persistent Memory Region: Not Supported 00:47:46.808 Optional Asynchronous Events Supported 00:47:46.808 Namespace Attribute Notices: Supported 00:47:46.808 Firmware Activation Notices: Not Supported 00:47:46.808 ANA Change Notices: Not Supported 00:47:46.808 PLE Aggregate Log Change Notices: Not Supported 00:47:46.808 LBA Status Info Alert Notices: Not Supported 00:47:46.808 EGE Aggregate Log Change Notices: Not Supported 00:47:46.808 Normal NVM Subsystem Shutdown event: Not Supported 00:47:46.808 Zone Descriptor Change Notices: Not Supported 00:47:46.808 Discovery Log Change Notices: Not Supported 00:47:46.808 Controller Attributes 00:47:46.808 128-bit Host Identifier: Supported 00:47:46.808 Non-Operational Permissive Mode: Not Supported 00:47:46.808 NVM Sets: Not Supported 00:47:46.808 Read Recovery Levels: Not Supported 00:47:46.808 Endurance Groups: Not Supported 00:47:46.808 Predictable Latency Mode: Not Supported 00:47:46.808 Traffic Based Keep ALive: Not Supported 00:47:46.808 Namespace Granularity: Not Supported 00:47:46.808 SQ Associations: Not Supported 00:47:46.808 UUID List: Not Supported 00:47:46.808 Multi-Domain Subsystem: Not Supported 00:47:46.808 Fixed Capacity Management: Not Supported 00:47:46.808 Variable Capacity Management: Not Supported 00:47:46.808 Delete Endurance Group: Not Supported 00:47:46.808 Delete NVM Set: Not Supported 00:47:46.808 Extended LBA Formats Supported: Not Supported 00:47:46.808 Flexible Data Placement Supported: Not Supported 00:47:46.808 00:47:46.808 Controller Memory Buffer Support 00:47:46.808 ================================ 00:47:46.808 Supported: No 00:47:46.808 00:47:46.808 Persistent Memory Region Support 00:47:46.808 ================================ 00:47:46.808 Supported: No 00:47:46.808 00:47:46.808 Admin Command Set Attributes 00:47:46.808 ============================ 00:47:46.808 Security Send/Receive: Not Supported 00:47:46.808 Format NVM: Not Supported 00:47:46.808 Firmware Activate/Download: Not Supported 00:47:46.808 Namespace Management: Not Supported 00:47:46.808 Device Self-Test: Not Supported 00:47:46.808 Directives: Not Supported 00:47:46.808 NVMe-MI: Not Supported 00:47:46.808 Virtualization Management: Not Supported 00:47:46.808 Doorbell Buffer Config: Not Supported 00:47:46.808 Get LBA Status Capability: Not Supported 00:47:46.808 Command & Feature Lockdown Capability: Not Supported 00:47:46.808 Abort Command Limit: 4 00:47:46.808 Async Event Request Limit: 4 00:47:46.808 Number of Firmware Slots: N/A 00:47:46.808 Firmware Slot 1 Read-Only: N/A 00:47:46.808 Firmware Activation Without Reset: N/A 00:47:46.808 Multiple Update Detection Support: N/A 00:47:46.808 Firmware Update Granularity: No Information Provided 00:47:46.808 Per-Namespace SMART Log: No 00:47:46.808 Asymmetric Namespace Access Log Page: Not Supported 00:47:46.808 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:47:46.808 Command Effects Log Page: Supported 00:47:46.808 Get Log Page Extended Data: Supported 00:47:46.808 Telemetry Log Pages: Not Supported 00:47:46.808 Persistent Event Log Pages: Not Supported 00:47:46.808 Supported Log Pages Log Page: May Support 00:47:46.808 Commands Supported & Effects Log Page: Not Supported 00:47:46.808 Feature Identifiers & Effects Log Page:May Support 
00:47:46.808 NVMe-MI Commands & Effects Log Page: May Support 00:47:46.808 Data Area 4 for Telemetry Log: Not Supported 00:47:46.808 Error Log Page Entries Supported: 128 00:47:46.808 Keep Alive: Supported 00:47:46.808 Keep Alive Granularity: 10000 ms 00:47:46.808 00:47:46.808 NVM Command Set Attributes 00:47:46.808 ========================== 00:47:46.808 Submission Queue Entry Size 00:47:46.808 Max: 64 00:47:46.808 Min: 64 00:47:46.808 Completion Queue Entry Size 00:47:46.808 Max: 16 00:47:46.808 Min: 16 00:47:46.808 Number of Namespaces: 32 00:47:46.808 Compare Command: Supported 00:47:46.808 Write Uncorrectable Command: Not Supported 00:47:46.808 Dataset Management Command: Supported 00:47:46.808 Write Zeroes Command: Supported 00:47:46.808 Set Features Save Field: Not Supported 00:47:46.808 Reservations: Supported 00:47:46.808 Timestamp: Not Supported 00:47:46.808 Copy: Supported 00:47:46.808 Volatile Write Cache: Present 00:47:46.808 Atomic Write Unit (Normal): 1 00:47:46.808 Atomic Write Unit (PFail): 1 00:47:46.808 Atomic Compare & Write Unit: 1 00:47:46.808 Fused Compare & Write: Supported 00:47:46.808 Scatter-Gather List 00:47:46.808 SGL Command Set: Supported 00:47:46.808 SGL Keyed: Supported 00:47:46.808 SGL Bit Bucket Descriptor: Not Supported 00:47:46.808 SGL Metadata Pointer: Not Supported 00:47:46.808 Oversized SGL: Not Supported 00:47:46.808 SGL Metadata Address: Not Supported 00:47:46.808 SGL Offset: Supported 00:47:46.809 Transport SGL Data Block: Not Supported 00:47:46.809 Replay Protected Memory Block: Not Supported 00:47:46.809 00:47:46.809 Firmware Slot Information 00:47:46.809 ========================= 00:47:46.809 Active slot: 1 00:47:46.809 Slot 1 Firmware Revision: 24.05.1 00:47:46.809 00:47:46.809 00:47:46.809 Commands Supported and Effects 00:47:46.809 ============================== 00:47:46.809 Admin Commands 00:47:46.809 -------------- 00:47:46.809 Get Log Page (02h): Supported 00:47:46.809 Identify (06h): Supported 00:47:46.809 Abort (08h): Supported 00:47:46.809 Set Features (09h): Supported 00:47:46.809 Get Features (0Ah): Supported 00:47:46.809 Asynchronous Event Request (0Ch): Supported 00:47:46.809 Keep Alive (18h): Supported 00:47:46.809 I/O Commands 00:47:46.809 ------------ 00:47:46.809 Flush (00h): Supported LBA-Change 00:47:46.809 Write (01h): Supported LBA-Change 00:47:46.809 Read (02h): Supported 00:47:46.809 Compare (05h): Supported 00:47:46.809 Write Zeroes (08h): Supported LBA-Change 00:47:46.809 Dataset Management (09h): Supported LBA-Change 00:47:46.809 Copy (19h): Supported LBA-Change 00:47:46.809 Unknown (79h): Supported LBA-Change 00:47:46.809 Unknown (7Ah): Supported 00:47:46.809 00:47:46.809 Error Log 00:47:46.809 ========= 00:47:46.809 00:47:46.809 Arbitration 00:47:46.809 =========== 00:47:46.809 Arbitration Burst: 1 00:47:46.809 00:47:46.809 Power Management 00:47:46.809 ================ 00:47:46.809 Number of Power States: 1 00:47:46.809 Current Power State: Power State #0 00:47:46.809 Power State #0: 00:47:46.809 Max Power: 0.00 W 00:47:46.809 Non-Operational State: Operational 00:47:46.809 Entry Latency: Not Reported 00:47:46.809 Exit Latency: Not Reported 00:47:46.809 Relative Read Throughput: 0 00:47:46.809 Relative Read Latency: 0 00:47:46.809 Relative Write Throughput: 0 00:47:46.809 Relative Write Latency: 0 00:47:46.809 Idle Power: Not Reported 00:47:46.809 Active Power: Not Reported 00:47:46.809 Non-Operational Permissive Mode: Not Supported 00:47:46.809 00:47:46.809 Health Information 00:47:46.809 ================== 
00:47:46.809 Critical Warnings: 00:47:46.809 Available Spare Space: OK 00:47:46.809 Temperature: OK 00:47:46.809 Device Reliability: OK 00:47:46.809 Read Only: No 00:47:46.809 Volatile Memory Backup: OK 00:47:46.809 Current Temperature: 0 Kelvin (-273 Celsius) 00:47:46.809 Temperature Threshold: [2024-07-22 15:01:06.353737] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.353742] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9d1970) 00:47:46.809 [2024-07-22 15:01:06.353748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.809 [2024-07-22 15:01:06.353764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0ab70, cid 7, qid 0 00:47:46.809 [2024-07-22 15:01:06.353837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.809 [2024-07-22 15:01:06.353846] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.809 [2024-07-22 15:01:06.353849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.353852] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0ab70) on tqpair=0x9d1970 00:47:46.809 [2024-07-22 15:01:06.353892] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:47:46.809 [2024-07-22 15:01:06.353906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.809 [2024-07-22 15:01:06.353914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.809 [2024-07-22 15:01:06.353920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.809 [2024-07-22 15:01:06.353924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:46.809 [2024-07-22 15:01:06.353932] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.353935] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.353938] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.809 [2024-07-22 15:01:06.353944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.809 [2024-07-22 15:01:06.353960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.809 [2024-07-22 15:01:06.354013] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.809 [2024-07-22 15:01:06.354018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.809 [2024-07-22 15:01:06.354021] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354024] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.809 [2024-07-22 15:01:06.354029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354033] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354035] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.809 [2024-07-22 15:01:06.354041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.809 [2024-07-22 15:01:06.354056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.809 [2024-07-22 15:01:06.354125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.809 [2024-07-22 15:01:06.354132] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.809 [2024-07-22 15:01:06.354134] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354137] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.809 [2024-07-22 15:01:06.354141] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:47:46.809 [2024-07-22 15:01:06.354145] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:47:46.809 [2024-07-22 15:01:06.354152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354155] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354158] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.809 [2024-07-22 15:01:06.354163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.809 [2024-07-22 15:01:06.354176] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.809 [2024-07-22 15:01:06.354227] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.809 [2024-07-22 15:01:06.354233] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.809 [2024-07-22 15:01:06.354236] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354238] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.809 [2024-07-22 15:01:06.354246] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354249] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.809 [2024-07-22 15:01:06.354252] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.809 [2024-07-22 15:01:06.354258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.809 [2024-07-22 15:01:06.354270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.809 [2024-07-22 15:01:06.354317] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.810 [2024-07-22 15:01:06.354322] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.810 [2024-07-22 15:01:06.354325] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354328] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.810 [2024-07-22 15:01:06.354336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354338] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354341] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.810 [2024-07-22 15:01:06.354347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.810 [2024-07-22 15:01:06.354359] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.810 [2024-07-22 15:01:06.354407] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.810 [2024-07-22 15:01:06.354413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.810 [2024-07-22 15:01:06.354415] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354418] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.810 [2024-07-22 15:01:06.354426] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354429] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354432] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.810 [2024-07-22 15:01:06.354437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.810 [2024-07-22 15:01:06.354451] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.810 [2024-07-22 15:01:06.354499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.810 [2024-07-22 15:01:06.354505] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.810 [2024-07-22 15:01:06.354507] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354510] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.810 [2024-07-22 15:01:06.354518] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354521] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.810 [2024-07-22 15:01:06.354529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.810 [2024-07-22 15:01:06.354541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.810 [2024-07-22 15:01:06.354594] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.810 [2024-07-22 15:01:06.354599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.810 [2024-07-22 15:01:06.354601] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354604] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.810 [2024-07-22 15:01:06.354612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354615] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.354618] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.810 
[2024-07-22 15:01:06.354623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.810 [2024-07-22 15:01:06.354636] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.810 [2024-07-22 15:01:06.358689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.810 [2024-07-22 15:01:06.358704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.810 [2024-07-22 15:01:06.358707] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.358709] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.810 [2024-07-22 15:01:06.358719] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.358722] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.358724] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d1970) 00:47:46.810 [2024-07-22 15:01:06.358729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:47:46.810 [2024-07-22 15:01:06.358747] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa0a5f0, cid 3, qid 0 00:47:46.810 [2024-07-22 15:01:06.358796] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:47:46.810 [2024-07-22 15:01:06.358803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:47:46.810 [2024-07-22 15:01:06.358806] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:47:46.810 [2024-07-22 15:01:06.358808] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa0a5f0) on tqpair=0x9d1970 00:47:46.810 [2024-07-22 15:01:06.358814] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:47:46.810 0 Kelvin (-273 Celsius) 00:47:46.810 Available Spare: 0% 00:47:46.810 Available Spare Threshold: 0% 00:47:46.810 Life Percentage Used: 0% 00:47:46.810 Data Units Read: 0 00:47:46.810 Data Units Written: 0 00:47:46.810 Host Read Commands: 0 00:47:46.810 Host Write Commands: 0 00:47:46.810 Controller Busy Time: 0 minutes 00:47:46.810 Power Cycles: 0 00:47:46.810 Power On Hours: 0 hours 00:47:46.810 Unsafe Shutdowns: 0 00:47:46.810 Unrecoverable Media Errors: 0 00:47:46.810 Lifetime Error Log Entries: 0 00:47:46.810 Warning Temperature Time: 0 minutes 00:47:46.810 Critical Temperature Time: 0 minutes 00:47:46.810 00:47:46.810 Number of Queues 00:47:46.810 ================ 00:47:46.810 Number of I/O Submission Queues: 127 00:47:46.810 Number of I/O Completion Queues: 127 00:47:46.810 00:47:46.810 Active Namespaces 00:47:46.810 ================= 00:47:46.810 Namespace ID:1 00:47:46.810 Error Recovery Timeout: Unlimited 00:47:46.810 Command Set Identifier: NVM (00h) 00:47:46.810 Deallocate: Supported 00:47:46.810 Deallocated/Unwritten Error: Not Supported 00:47:46.810 Deallocated Read Value: Unknown 00:47:46.810 Deallocate in Write Zeroes: Not Supported 00:47:46.810 Deallocated Guard Field: 0xFFFF 00:47:46.810 Flush: Supported 00:47:46.810 Reservation: Supported 00:47:46.810 Namespace Sharing Capabilities: Multiple Controllers 00:47:46.810 Size (in LBAs): 131072 (0GiB) 00:47:46.810 Capacity (in LBAs): 131072 (0GiB) 00:47:46.810 Utilization (in LBAs): 131072 (0GiB) 00:47:46.810 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:47:46.810 EUI64: ABCDEF0123456789 00:47:46.810 UUID: 72d57f37-b4c6-4402-88b5-0e99555d8e32 00:47:46.810 Thin Provisioning: Not Supported 00:47:46.810 Per-NS Atomic Units: Yes 00:47:46.810 Atomic Boundary Size (Normal): 0 00:47:46.810 Atomic Boundary Size (PFail): 0 00:47:46.810 Atomic Boundary Offset: 0 00:47:46.810 Maximum Single Source Range Length: 65535 00:47:46.810 Maximum Copy Length: 65535 00:47:46.810 Maximum Source Range Count: 1 00:47:46.810 NGUID/EUI64 Never Reused: No 00:47:46.810 Namespace Write Protected: No 00:47:46.810 Number of LBA Formats: 1 00:47:46.810 Current LBA Format: LBA Format #00 00:47:46.810 LBA Format #00: Data Size: 512 Metadata Size: 0 00:47:46.810 00:47:46.810 15:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:47:47.070 15:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:47:47.071 rmmod nvme_tcp 00:47:47.071 rmmod nvme_fabrics 00:47:47.071 rmmod nvme_keyring 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 103822 ']' 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 103822 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 103822 ']' 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 103822 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103822 00:47:47.071 killing process with pid 103822 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103822' 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 103822 00:47:47.071 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 103822 00:47:47.330 15:01:06 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:47:47.330 00:47:47.330 real 0m2.457s 00:47:47.330 user 0m6.798s 00:47:47.330 sys 0m0.674s 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:47:47.330 15:01:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:47:47.330 ************************************ 00:47:47.330 END TEST nvmf_identify 00:47:47.330 ************************************ 00:47:47.330 15:01:06 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:47:47.330 15:01:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:47:47.330 15:01:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:47:47.330 15:01:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:47:47.330 ************************************ 00:47:47.330 START TEST nvmf_perf 00:47:47.330 ************************************ 00:47:47.330 15:01:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:47:47.590 * Looking for test storage... 
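The nvmf_perf test that begins here (test/nvmf/host/perf.sh) brings up an NVMe-oF/TCP target and then drives it with spdk_nvme_perf at several queue depths and I/O sizes. As a reading aid, a minimal sketch of the equivalent manual bring-up follows; the commands, NQN, address and perf flags are taken from the trace below, while the repo-relative paths (scripts/, build/bin/) are an abbreviation of the absolute /home/vagrant/spdk_repo/spdk paths the job actually uses, and only one of the two namespaces is shown.

# Target app runs inside the test namespace (started in the background by nvmfappstart in the trace)
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# Export a malloc-backed namespace over TCP on 10.0.0.2:4420
scripts/rpc.py bdev_malloc_create 64 512
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # the trace also adds Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: one of the fabric runs exercised below (-q and -o vary per run)
build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'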
00:47:47.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:47:47.590 15:01:06 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:47:47.590 15:01:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:47:47.590 Cannot find device "nvmf_tgt_br" 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:47:47.590 Cannot find device "nvmf_tgt_br2" 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:47:47.590 Cannot find device "nvmf_tgt_br" 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:47:47.590 Cannot find device "nvmf_tgt_br2" 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:47:47.590 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:47:47.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:47:47.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:47:47.850 
15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:47:47.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:47:47.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:47:47.850 00:47:47.850 --- 10.0.0.2 ping statistics --- 00:47:47.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:47.850 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:47:47.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:47:47.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:47:47.850 00:47:47.850 --- 10.0.0.3 ping statistics --- 00:47:47.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:47.850 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:47:47.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:47:47.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:47:47.850 00:47:47.850 --- 10.0.0.1 ping statistics --- 00:47:47.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:47:47.850 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=104046 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 104046 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 104046 ']' 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:47.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:47:47.850 15:01:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:47:48.110 [2024-07-22 15:01:07.526438] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:47:48.110 [2024-07-22 15:01:07.526509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:48.110 [2024-07-22 15:01:07.668260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:47:48.110 [2024-07-22 15:01:07.722081] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:47:48.110 [2024-07-22 15:01:07.722221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
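The nvmf_veth_init sequence above builds the virtual test network: the initiator side stays in the root namespace at 10.0.0.1, the target runs inside the nvmf_tgt_ns_spdk namespace at 10.0.0.2 (plus 10.0.0.3 on a second interface), and all veth peers hang off the nvmf_br bridge. A condensed sketch of that topology, using the commands as they appear in the trace but omitting the second target interface and the individual link-up steps:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in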
00:47:48.110 [2024-07-22 15:01:07.722284] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:47:48.110 [2024-07-22 15:01:07.722328] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:47:48.110 [2024-07-22 15:01:07.722347] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:47:48.110 [2024-07-22 15:01:07.722645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:47:48.110 [2024-07-22 15:01:07.722765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:47:48.110 [2024-07-22 15:01:07.722887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:47:48.110 [2024-07-22 15:01:07.722888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:47:49.046 15:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:47:49.047 15:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:47:49.047 15:01:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:47:49.047 15:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:49.047 15:01:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:47:49.047 15:01:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:47:49.047 15:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:47:49.047 15:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:47:49.304 15:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:47:49.304 15:01:08 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:47:49.563 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:47:49.563 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:47:49.821 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:47:49.821 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:47:49.821 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:47:49.821 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:47:49.821 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:47:50.079 [2024-07-22 15:01:09.545483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:50.079 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:47:50.338 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:47:50.338 15:01:09 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:47:50.597 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:47:50.597 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:47:50.856 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:47:50.856 [2024-07-22 15:01:10.465773] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:47:51.114 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:47:51.115 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:47:51.115 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:47:51.115 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:47:51.115 15:01:10 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:47:52.491 Initializing NVMe Controllers 00:47:52.491 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:47:52.491 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:47:52.491 Initialization complete. Launching workers. 00:47:52.491 ======================================================== 00:47:52.491 Latency(us) 00:47:52.491 Device Information : IOPS MiB/s Average min max 00:47:52.491 PCIE (0000:00:10.0) NSID 1 from core 0: 24544.00 95.88 1308.75 317.08 7907.87 00:47:52.491 ======================================================== 00:47:52.492 Total : 24544.00 95.88 1308.75 317.08 7907.87 00:47:52.492 00:47:52.492 15:01:11 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:53.442 Initializing NVMe Controllers 00:47:53.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:53.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:53.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:47:53.442 Initialization complete. Launching workers. 00:47:53.442 ======================================================== 00:47:53.442 Latency(us) 00:47:53.442 Device Information : IOPS MiB/s Average min max 00:47:53.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5464.38 21.35 182.79 71.95 4163.87 00:47:53.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.76 0.48 8144.22 7986.77 12024.53 00:47:53.443 ======================================================== 00:47:53.443 Total : 5588.14 21.83 359.11 71.95 12024.53 00:47:53.443 00:47:53.702 15:01:13 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:55.082 Initializing NVMe Controllers 00:47:55.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:55.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:55.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:47:55.082 Initialization complete. Launching workers. 
00:47:55.082 ======================================================== 00:47:55.082 Latency(us) 00:47:55.082 Device Information : IOPS MiB/s Average min max 00:47:55.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10129.40 39.57 3160.08 474.09 16305.61 00:47:55.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2669.58 10.43 12090.92 5934.61 25250.06 00:47:55.082 ======================================================== 00:47:55.082 Total : 12798.98 50.00 5022.85 474.09 25250.06 00:47:55.082 00:47:55.082 15:01:14 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:47:55.082 15:01:14 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:47:57.618 Initializing NVMe Controllers 00:47:57.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:57.618 Controller IO queue size 128, less than required. 00:47:57.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:57.618 Controller IO queue size 128, less than required. 00:47:57.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:57.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:47:57.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:47:57.618 Initialization complete. Launching workers. 00:47:57.618 ======================================================== 00:47:57.618 Latency(us) 00:47:57.618 Device Information : IOPS MiB/s Average min max 00:47:57.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2160.94 540.24 60356.00 41572.29 104829.20 00:47:57.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 513.30 128.32 254330.40 140541.40 437162.59 00:47:57.618 ======================================================== 00:47:57.618 Total : 2674.24 668.56 97587.80 41572.29 437162.59 00:47:57.618 00:47:57.618 15:01:16 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:47:57.618 Initializing NVMe Controllers 00:47:57.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:47:57.618 Controller IO queue size 128, less than required. 00:47:57.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:57.618 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:47:57.618 Controller IO queue size 128, less than required. 00:47:57.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:47:57.618 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:47:57.618 WARNING: Some requested NVMe devices were skipped 00:47:57.618 No valid NVMe controllers or AIO or URING devices found 00:47:57.618 15:01:17 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:48:00.161 Initializing NVMe Controllers 00:48:00.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:00.161 Controller IO queue size 128, less than required. 00:48:00.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:48:00.161 Controller IO queue size 128, less than required. 00:48:00.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:48:00.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:48:00.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:48:00.161 Initialization complete. Launching workers. 00:48:00.161 00:48:00.161 ==================== 00:48:00.161 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:48:00.161 TCP transport: 00:48:00.161 polls: 15397 00:48:00.161 idle_polls: 11110 00:48:00.161 sock_completions: 4287 00:48:00.161 nvme_completions: 6067 00:48:00.161 submitted_requests: 9132 00:48:00.161 queued_requests: 1 00:48:00.161 00:48:00.161 ==================== 00:48:00.161 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:48:00.161 TCP transport: 00:48:00.161 polls: 9627 00:48:00.161 idle_polls: 5865 00:48:00.161 sock_completions: 3762 00:48:00.161 nvme_completions: 7339 00:48:00.161 submitted_requests: 11030 00:48:00.161 queued_requests: 1 00:48:00.161 ======================================================== 00:48:00.161 Latency(us) 00:48:00.161 Device Information : IOPS MiB/s Average min max 00:48:00.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1513.50 378.37 86419.44 55025.52 151082.15 00:48:00.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1830.87 457.72 69946.57 34935.42 132476.95 00:48:00.161 ======================================================== 00:48:00.161 Total : 3344.37 836.09 77401.39 34935.42 151082.15 00:48:00.161 00:48:00.161 15:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:48:00.421 15:01:19 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:00.421 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:48:00.421 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:48:00.421 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:48:00.682 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=ea202bfb-7e40-4e09-a04a-3629b3003d67 00:48:00.682 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb ea202bfb-7e40-4e09-a04a-3629b3003d67 00:48:00.682 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=ea202bfb-7e40-4e09-a04a-3629b3003d67 00:48:00.682 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:48:00.682 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 
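For the second half of the test, perf.sh layers logical volumes on top of the local NVMe bdev and re-exports one of them through cnode1: an lvstore lvs_0 is created directly on Nvme0n1 (just above), a 5112 MiB lvol lbd_0 is carved out of it, a nested lvstore lvs_n_0 is created on that lvol, and a 5104 MiB lvol lbd_nest_0 becomes the namespace for the remaining runs. The sizes follow from free_clusters times the 4 MiB cluster size reported by bdev_lvol_get_lvstores (1278 and 1276 clusters respectively). A condensed sketch of those RPC calls, with the UUIDs reported in the trace elided and repo-relative paths assumed:

scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0                # lvstore on the NVMe bdev
scripts/rpc.py bdev_lvol_create -u <lvs_0-uuid> lbd_0 5112           # lvol filling lvs_0
scripts/rpc.py bdev_lvol_create_lvstore <lbd_0-uuid> lvs_n_0         # nested lvstore on that lvol
scripts/rpc.py bdev_lvol_create -u <lvs_n_0-uuid> lbd_nest_0 5104    # nested lvol, exported via cnode1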
00:48:00.682 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:48:00.682 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:48:00.944 { 00:48:00.944 "base_bdev": "Nvme0n1", 00:48:00.944 "block_size": 4096, 00:48:00.944 "cluster_size": 4194304, 00:48:00.944 "free_clusters": 1278, 00:48:00.944 "name": "lvs_0", 00:48:00.944 "total_data_clusters": 1278, 00:48:00.944 "uuid": "ea202bfb-7e40-4e09-a04a-3629b3003d67" 00:48:00.944 } 00:48:00.944 ]' 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="ea202bfb-7e40-4e09-a04a-3629b3003d67") .free_clusters' 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="ea202bfb-7e40-4e09-a04a-3629b3003d67") .cluster_size' 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:48:00.944 5112 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:48:00.944 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ea202bfb-7e40-4e09-a04a-3629b3003d67 lbd_0 5112 00:48:01.203 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=3cd6714e-79d9-4d9a-b1bb-1e9cb3477bff 00:48:01.203 15:01:20 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3cd6714e-79d9-4d9a-b1bb-1e9cb3477bff lvs_n_0 00:48:01.462 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2363065a-9227-4b3a-b888-ab589a23052f 00:48:01.462 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2363065a-9227-4b3a-b888-ab589a23052f 00:48:01.462 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=2363065a-9227-4b3a-b888-ab589a23052f 00:48:01.462 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:48:01.462 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:48:01.462 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:48:01.462 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:48:01.721 { 00:48:01.721 "base_bdev": "Nvme0n1", 00:48:01.721 "block_size": 4096, 00:48:01.721 "cluster_size": 4194304, 00:48:01.721 "free_clusters": 0, 00:48:01.721 "name": "lvs_0", 00:48:01.721 "total_data_clusters": 1278, 00:48:01.721 "uuid": "ea202bfb-7e40-4e09-a04a-3629b3003d67" 00:48:01.721 }, 00:48:01.721 { 00:48:01.721 "base_bdev": "3cd6714e-79d9-4d9a-b1bb-1e9cb3477bff", 00:48:01.721 "block_size": 4096, 00:48:01.721 "cluster_size": 4194304, 00:48:01.721 "free_clusters": 1276, 00:48:01.721 "name": "lvs_n_0", 00:48:01.721 "total_data_clusters": 1276, 00:48:01.721 "uuid": "2363065a-9227-4b3a-b888-ab589a23052f" 00:48:01.721 } 00:48:01.721 ]' 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | 
select(.uuid=="2363065a-9227-4b3a-b888-ab589a23052f") .free_clusters' 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2363065a-9227-4b3a-b888-ab589a23052f") .cluster_size' 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:48:01.721 5104 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:48:01.721 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2363065a-9227-4b3a-b888-ab589a23052f lbd_nest_0 5104 00:48:01.980 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5dfb23dc-0c97-4cee-92c4-39345538a5ad 00:48:01.980 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:48:02.239 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:48:02.239 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5dfb23dc-0c97-4cee-92c4-39345538a5ad 00:48:02.497 15:01:21 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:02.755 15:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:48:02.755 15:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:48:02.755 15:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:48:02.755 15:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:48:02.755 15:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:48:03.012 Initializing NVMe Controllers 00:48:03.012 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:03.012 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:48:03.012 WARNING: Some requested NVMe devices were skipped 00:48:03.012 No valid NVMe controllers or AIO or URING devices found 00:48:03.012 15:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:48:03.012 15:01:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:48:15.242 Initializing NVMe Controllers 00:48:15.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:15.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:48:15.242 Initialization complete. Launching workers. 
00:48:15.242 ======================================================== 00:48:15.242 Latency(us) 00:48:15.242 Device Information : IOPS MiB/s Average min max 00:48:15.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1031.46 128.93 968.88 284.22 7993.55 00:48:15.242 ======================================================== 00:48:15.242 Total : 1031.46 128.93 968.88 284.22 7993.55 00:48:15.242 00:48:15.242 15:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:48:15.242 15:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:48:15.242 15:01:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:48:15.242 Initializing NVMe Controllers 00:48:15.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:15.242 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:48:15.242 WARNING: Some requested NVMe devices were skipped 00:48:15.242 No valid NVMe controllers or AIO or URING devices found 00:48:15.242 15:01:33 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:48:15.242 15:01:33 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:48:25.224 Initializing NVMe Controllers 00:48:25.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:25.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:48:25.224 Initialization complete. Launching workers. 
00:48:25.224 ======================================================== 00:48:25.224 Latency(us) 00:48:25.224 Device Information : IOPS MiB/s Average min max 00:48:25.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1085.90 135.74 29525.24 7851.51 245632.52 00:48:25.224 ======================================================== 00:48:25.224 Total : 1085.90 135.74 29525.24 7851.51 245632.52 00:48:25.224 00:48:25.224 15:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:48:25.224 15:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:48:25.224 15:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:48:25.224 Initializing NVMe Controllers 00:48:25.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:25.224 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:48:25.224 WARNING: Some requested NVMe devices were skipped 00:48:25.224 No valid NVMe controllers or AIO or URING devices found 00:48:25.224 15:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:48:25.224 15:01:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:48:35.205 Initializing NVMe Controllers 00:48:35.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:48:35.205 Controller IO queue size 128, less than required. 00:48:35.205 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:48:35.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:48:35.205 Initialization complete. Launching workers. 
00:48:35.205 ======================================================== 00:48:35.205 Latency(us) 00:48:35.205 Device Information : IOPS MiB/s Average min max 00:48:35.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4652.78 581.60 27533.86 8149.51 69549.89 00:48:35.205 ======================================================== 00:48:35.205 Total : 4652.78 581.60 27533.86 8149.51 69549.89 00:48:35.205 00:48:35.205 15:01:53 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:35.205 15:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5dfb23dc-0c97-4cee-92c4-39345538a5ad 00:48:35.205 15:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:48:35.205 15:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3cd6714e-79d9-4d9a-b1bb-1e9cb3477bff 00:48:35.205 15:01:54 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:48:35.464 rmmod nvme_tcp 00:48:35.464 rmmod nvme_fabrics 00:48:35.464 rmmod nvme_keyring 00:48:35.464 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 104046 ']' 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 104046 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 104046 ']' 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 104046 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104046 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104046' 00:48:35.724 killing process with pid 104046 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 104046 00:48:35.724 15:01:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 104046 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:37.628 15:01:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:37.888 15:01:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:48:37.888 00:48:37.888 real 0m50.389s 00:48:37.888 user 3m9.174s 00:48:37.888 sys 0m9.507s 00:48:37.888 15:01:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:48:37.888 15:01:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:48:37.888 ************************************ 00:48:37.888 END TEST nvmf_perf 00:48:37.888 ************************************ 00:48:37.888 15:01:57 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:48:37.888 15:01:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:48:37.888 15:01:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:48:37.888 15:01:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:48:37.888 ************************************ 00:48:37.888 START TEST nvmf_fio_host 00:48:37.888 ************************************ 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:48:37.888 * Looking for test storage... 
00:48:37.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:37.888 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
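The nvmf_veth_init trace that follows builds a small virtual topology for the test: a network namespace holding the target, veth pairs whose host-side peers are enslaved to a bridge, 10.0.0.x/24 addresses, and an iptables rule admitting TCP port 4420. A condensed sketch of that setup is below; the interface and namespace names are taken from the NVMF_* variables printed above, and the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity.

    # Condensed sketch of the veth/namespace topology that nvmf_veth_init appears to build.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two host-side peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator-to-target reachability check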
00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:37.889 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:48:38.148 Cannot find device "nvmf_tgt_br" 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:48:38.148 Cannot find device "nvmf_tgt_br2" 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:48:38.148 Cannot find device "nvmf_tgt_br" 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:48:38.148 Cannot find device "nvmf_tgt_br2" 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:38.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:38.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:48:38.148 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:48:38.149 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:38.149 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:48:38.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:48:38.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:48:38.408 00:48:38.408 --- 10.0.0.2 ping statistics --- 00:48:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:38.408 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:48:38.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:38.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:48:38.408 00:48:38.408 --- 10.0.0.3 ping statistics --- 00:48:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:38.408 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:38.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:48:38.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:48:38.408 00:48:38.408 --- 10.0.0.1 ping statistics --- 00:48:38.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:38.408 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=105015 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 105015 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 105015 ']' 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:48:38.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:48:38.408 15:01:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:48:38.408 [2024-07-22 15:01:57.897804] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:48:38.408 [2024-07-22 15:01:57.897867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:38.408 [2024-07-22 15:01:58.025102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:48:38.669 [2024-07-22 15:01:58.077542] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:38.669 [2024-07-22 15:01:58.077594] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:48:38.669 [2024-07-22 15:01:58.077605] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:38.669 [2024-07-22 15:01:58.077614] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:38.669 [2024-07-22 15:01:58.077621] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:38.669 [2024-07-22 15:01:58.077742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:38.669 [2024-07-22 15:01:58.077787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:48:38.669 [2024-07-22 15:01:58.078935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:48:38.669 [2024-07-22 15:01:58.078936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:48:39.236 15:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:48:39.236 15:01:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:48:39.236 15:01:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:48:39.495 [2024-07-22 15:01:59.001356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:39.495 15:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:48:39.495 15:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:39.495 15:01:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:48:39.495 15:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:48:39.753 Malloc1 00:48:39.753 15:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:48:40.012 15:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:48:40.271 15:01:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:48:40.531 [2024-07-22 15:02:00.016947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:48:40.531 15:02:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 
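Stepping back from the fio plumbing for a moment: the rpc.py sequence traced above is how this suite exposes a bdev over NVMe/TCP before driving I/O at it. The transport is created once, a RAM-backed bdev is created, and a subsystem gets that bdev as a namespace plus a TCP listener. The sketch below condenses those calls; the commands and arguments are copied from the trace, while the $rpc shorthand is only an assumption for readability.

    # Sketch of the RPC bring-up traced above: expose Malloc1 as nqn.2016-06.io.spdk:cnode1 over TCP.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the flags used above
    $rpc bdev_malloc_create 64 512 -b Malloc1                    # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420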
00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:40.789 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:48:40.790 15:02:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:41.047 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:48:41.047 fio-3.35 00:48:41.047 Starting 1 thread 00:48:43.578 00:48:43.578 test: (groupid=0, jobs=1): err= 0: pid=105143: Mon Jul 22 15:02:02 2024 00:48:43.578 read: IOPS=9877, BW=38.6MiB/s (40.5MB/s)(77.4MiB/2006msec) 00:48:43.578 slat (nsec): min=1509, max=457260, avg=2118.86, stdev=4498.61 00:48:43.578 clat (usec): min=4551, max=15295, avg=6778.88, stdev=746.30 00:48:43.578 lat (usec): min=4554, max=15309, avg=6781.00, stdev=746.82 00:48:43.578 clat percentiles (usec): 00:48:43.578 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6259], 00:48:43.578 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915], 00:48:43.578 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7635], 00:48:43.578 | 99.00th=[ 8979], 99.50th=[10683], 99.90th=[15008], 99.95th=[15139], 00:48:43.578 | 99.99th=[15270] 00:48:43.578 bw ( KiB/s): min=37328, max=43160, per=100.00%, avg=39510.00, stdev=2551.76, samples=4 00:48:43.578 iops : min= 9332, max=10790, avg=9877.50, stdev=637.94, samples=4 00:48:43.578 write: IOPS=9893, BW=38.6MiB/s (40.5MB/s)(77.5MiB/2006msec); 0 zone resets 00:48:43.578 slat (nsec): min=1544, max=472057, avg=2187.65, stdev=3543.25 00:48:43.578 clat (usec): min=3779, max=11772, avg=6133.23, stdev=604.38 00:48:43.578 lat (usec): min=3798, max=11774, avg=6135.42, stdev=604.78 00:48:43.578 clat percentiles (usec): 00:48:43.578 | 1.00th=[ 4752], 5.00th=[ 
5080], 10.00th=[ 5342], 20.00th=[ 5669], 00:48:43.578 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6194], 60.00th=[ 6325], 00:48:43.578 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:48:43.578 | 99.00th=[ 7308], 99.50th=[ 8979], 99.90th=[10814], 99.95th=[11076], 00:48:43.578 | 99.99th=[11731] 00:48:43.578 bw ( KiB/s): min=38040, max=43240, per=99.96%, avg=39558.00, stdev=2465.08, samples=4 00:48:43.578 iops : min= 9510, max=10810, avg=9889.50, stdev=616.27, samples=4 00:48:43.578 lat (msec) : 4=0.02%, 10=99.48%, 20=0.50% 00:48:43.578 cpu : usr=74.81%, sys=18.55%, ctx=7, majf=0, minf=5 00:48:43.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:48:43.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:43.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:48:43.578 issued rwts: total=19814,19847,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:43.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:48:43.578 00:48:43.578 Run status group 0 (all jobs): 00:48:43.578 READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=77.4MiB (81.2MB), run=2006-2006msec 00:48:43.578 WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=77.5MiB (81.3MB), run=2006-2006msec 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:48:43.578 15:02:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:48:43.578 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:48:43.578 fio-3.35 00:48:43.578 Starting 1 thread 00:48:46.109 00:48:46.109 test: (groupid=0, jobs=1): err= 0: pid=105186: Mon Jul 22 15:02:05 2024 00:48:46.109 read: IOPS=8967, BW=140MiB/s (147MB/s)(281MiB/2007msec) 00:48:46.109 slat (usec): min=3, max=112, avg= 3.48, stdev= 2.00 00:48:46.109 clat (usec): min=2157, max=20687, avg=8414.09, stdev=2176.40 00:48:46.109 lat (usec): min=2160, max=20690, avg=8417.57, stdev=2176.77 00:48:46.109 clat percentiles (usec): 00:48:46.109 | 1.00th=[ 4293], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6587], 00:48:46.109 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8356], 60.00th=[ 9110], 00:48:46.109 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10814], 95.00th=[11731], 00:48:46.109 | 99.00th=[14353], 99.50th=[17695], 99.90th=[19792], 99.95th=[20317], 00:48:46.109 | 99.99th=[20579] 00:48:46.109 bw ( KiB/s): min=66720, max=76960, per=50.35%, avg=72232.00, stdev=4384.22, samples=4 00:48:46.109 iops : min= 4170, max= 4810, avg=4514.50, stdev=274.01, samples=4 00:48:46.109 write: IOPS=5321, BW=83.1MiB/s (87.2MB/s)(147MiB/1767msec); 0 zone resets 00:48:46.109 slat (usec): min=33, max=547, avg=37.23, stdev= 6.38 00:48:46.109 clat (usec): min=3972, max=20892, avg=10375.90, stdev=2101.76 00:48:46.109 lat (usec): min=4008, max=20934, avg=10413.13, stdev=2102.62 00:48:46.109 clat percentiles (usec): 00:48:46.109 | 1.00th=[ 6718], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8717], 00:48:46.109 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10552], 00:48:46.109 | 70.00th=[11076], 80.00th=[11731], 90.00th=[13042], 95.00th=[14091], 00:48:46.109 | 99.00th=[17957], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:48:46.109 | 99.99th=[20841] 00:48:46.109 bw ( KiB/s): min=70016, max=79872, per=88.35%, avg=75224.00, stdev=4255.03, samples=4 00:48:46.109 iops : min= 4376, max= 4992, avg=4701.50, stdev=265.94, samples=4 00:48:46.109 lat (msec) : 4=0.38%, 10=68.02%, 20=31.38%, 50=0.22% 00:48:46.109 cpu : usr=77.87%, sys=14.56%, ctx=4, majf=0, minf=2 00:48:46.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:48:46.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:46.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:48:46.109 issued rwts: total=17997,9403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:46.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:48:46.109 00:48:46.109 Run status group 0 (all jobs): 00:48:46.109 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=281MiB (295MB), run=2007-2007msec 00:48:46.109 WRITE: bw=83.1MiB/s (87.2MB/s), 83.1MiB/s-83.1MiB/s (87.2MB/s-87.2MB/s), io=147MiB (154MB), run=1767-1767msec 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:48:46.109 15:02:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:48:46.368 Nvme0n1 00:48:46.368 15:02:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=0a37d2bf-ec9f-48fa-af62-1aa85ef4547f 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 0a37d2bf-ec9f-48fa-af62-1aa85ef4547f 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=0a37d2bf-ec9f-48fa-af62-1aa85ef4547f 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:48:46.628 { 00:48:46.628 "base_bdev": "Nvme0n1", 00:48:46.628 "block_size": 4096, 00:48:46.628 "cluster_size": 1073741824, 00:48:46.628 "free_clusters": 4, 00:48:46.628 "name": "lvs_0", 00:48:46.628 "total_data_clusters": 4, 00:48:46.628 "uuid": "0a37d2bf-ec9f-48fa-af62-1aa85ef4547f" 00:48:46.628 } 00:48:46.628 ]' 00:48:46.628 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="0a37d2bf-ec9f-48fa-af62-1aa85ef4547f") .free_clusters' 00:48:46.886 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:48:46.886 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="0a37d2bf-ec9f-48fa-af62-1aa85ef4547f") .cluster_size' 00:48:46.886 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:48:46.886 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:48:46.886 15:02:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:48:46.886 4096 00:48:46.886 15:02:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:48:46.886 509be3ed-a872-4a37-975c-c6d37ab60cda 00:48:46.886 15:02:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:48:47.144 15:02:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:48:47.403 15:02:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:48:47.662 15:02:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
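The fio_nvme call traced just above follows the usual SPDK fio-plugin pattern: preload the spdk_nvme engine, use ioengine=spdk in the job file, and encode the NVMe-oF connection parameters in the filename string instead of a device path. A minimal sketch of that pattern follows; the job-file body is an assumed stand-in, not SPDK's example_config.fio, while the plugin path and filename string come from the trace.

    # Sketch of the fio + SPDK nvme external-ioengine invocation pattern traced above.
    # The job file written here is an assumed minimal stand-in, not SPDK's example_config.fio.
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    printf '%s\n' '[global]' 'ioengine=spdk' 'thread=1' 'rw=randrw' \
        'iodepth=128' 'time_based=1' 'runtime=10' '[test]' > /tmp/nvmf_job.fio
    # The filename string carries the fabrics connection info (trtype/traddr/trsvcid/ns).
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio /tmp/nvmf_job.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096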
00:48:47.921 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:48:47.921 fio-3.35 00:48:47.921 Starting 1 thread 00:48:50.451 00:48:50.451 test: (groupid=0, jobs=1): err= 0: pid=105343: Mon Jul 22 15:02:09 2024 00:48:50.451 read: IOPS=7070, BW=27.6MiB/s (29.0MB/s)(55.4MiB/2007msec) 00:48:50.451 slat (nsec): min=1536, max=362974, avg=2190.83, stdev=3968.95 00:48:50.451 clat (usec): min=3675, max=16615, avg=9478.77, stdev=805.45 00:48:50.451 lat (usec): min=3689, max=16617, avg=9480.96, stdev=805.18 00:48:50.451 clat percentiles (usec): 00:48:50.451 | 1.00th=[ 7767], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:48:50.451 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[ 9634], 00:48:50.451 | 70.00th=[ 9896], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:48:50.451 | 99.00th=[11600], 99.50th=[11994], 99.90th=[14877], 99.95th=[15926], 00:48:50.451 | 99.99th=[16581] 00:48:50.451 bw ( KiB/s): min=27000, max=29008, per=99.86%, avg=28242.00, stdev=878.70, samples=4 00:48:50.451 iops : min= 6750, max= 7252, avg=7060.50, stdev=219.67, samples=4 00:48:50.451 write: IOPS=7068, BW=27.6MiB/s (29.0MB/s)(55.4MiB/2007msec); 0 zone resets 00:48:50.451 slat (nsec): min=1590, max=279459, avg=2241.97, stdev=2739.87 00:48:50.451 clat (usec): min=2664, max=16634, avg=8531.71, stdev=730.32 00:48:50.451 lat (usec): min=2678, max=16636, avg=8533.95, stdev=730.13 00:48:50.452 clat percentiles (usec): 00:48:50.452 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 7963], 00:48:50.452 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8717], 00:48:50.452 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9372], 95.00th=[ 9634], 00:48:50.452 | 99.00th=[10159], 99.50th=[10552], 99.90th=[13304], 99.95th=[14746], 00:48:50.452 | 99.99th=[16057] 00:48:50.452 bw ( KiB/s): min=27904, max=28672, per=99.95%, avg=28260.00, stdev=352.03, samples=4 00:48:50.452 iops : min= 6976, max= 7168, avg=7065.00, stdev=88.01, samples=4 00:48:50.452 lat (msec) : 4=0.06%, 10=87.81%, 20=12.12% 00:48:50.452 cpu : usr=76.77%, sys=18.39%, ctx=21, majf=0, minf=5 00:48:50.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:48:50.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:50.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:48:50.452 issued rwts: total=14191,14187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:50.452 latency : target=0, window=0, percentile=100.00%, depth=128 00:48:50.452 00:48:50.452 Run status group 0 (all jobs): 00:48:50.452 READ: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=55.4MiB (58.1MB), run=2007-2007msec 00:48:50.452 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=55.4MiB (58.1MB), run=2007-2007msec 00:48:50.452 15:02:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:48:50.452 15:02:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:48:50.452 15:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=2ed1fce8-5cfe-44a6-9e72-278823c5a21a 00:48:50.452 15:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 2ed1fce8-5cfe-44a6-9e72-278823c5a21a 00:48:50.452 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local 
lvs_uuid=2ed1fce8-5cfe-44a6-9e72-278823c5a21a 00:48:50.452 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:48:50.452 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:48:50.452 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:48:50.452 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:48:50.710 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:48:50.710 { 00:48:50.710 "base_bdev": "Nvme0n1", 00:48:50.710 "block_size": 4096, 00:48:50.710 "cluster_size": 1073741824, 00:48:50.710 "free_clusters": 0, 00:48:50.710 "name": "lvs_0", 00:48:50.710 "total_data_clusters": 4, 00:48:50.710 "uuid": "0a37d2bf-ec9f-48fa-af62-1aa85ef4547f" 00:48:50.710 }, 00:48:50.710 { 00:48:50.710 "base_bdev": "509be3ed-a872-4a37-975c-c6d37ab60cda", 00:48:50.710 "block_size": 4096, 00:48:50.710 "cluster_size": 4194304, 00:48:50.710 "free_clusters": 1022, 00:48:50.710 "name": "lvs_n_0", 00:48:50.710 "total_data_clusters": 1022, 00:48:50.710 "uuid": "2ed1fce8-5cfe-44a6-9e72-278823c5a21a" 00:48:50.710 } 00:48:50.710 ]' 00:48:50.710 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="2ed1fce8-5cfe-44a6-9e72-278823c5a21a") .free_clusters' 00:48:50.711 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:48:50.711 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2ed1fce8-5cfe-44a6-9e72-278823c5a21a") .cluster_size' 00:48:50.711 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:48:50.711 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:48:50.711 4088 00:48:50.711 15:02:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:48:50.711 15:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:48:50.970 c4ce6793-7574-4a25-ab01-5127707c6118 00:48:50.970 15:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:48:51.230 15:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:48:51.230 15:02:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local 
sanitizers 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:48:51.490 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:48:51.748 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:48:51.748 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:48:51.748 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:48:51.748 15:02:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:48:51.748 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:48:51.748 fio-3.35 00:48:51.748 Starting 1 thread 00:48:54.285 00:48:54.285 test: (groupid=0, jobs=1): err= 0: pid=105458: Mon Jul 22 15:02:13 2024 00:48:54.285 read: IOPS=6310, BW=24.7MiB/s (25.8MB/s)(49.6MiB/2010msec) 00:48:54.285 slat (nsec): min=1560, max=432525, avg=2215.45, stdev=4992.78 00:48:54.285 clat (usec): min=4322, max=17602, avg=10627.86, stdev=843.13 00:48:54.285 lat (usec): min=4336, max=17615, avg=10630.07, stdev=842.77 00:48:54.285 clat percentiles (usec): 00:48:54.285 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:48:54.285 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:48:54.285 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:48:54.285 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16581], 99.95th=[16909], 00:48:54.285 | 99.99th=[17695] 00:48:54.285 bw ( KiB/s): min=24440, max=25664, per=100.00%, avg=25248.00, stdev=553.68, samples=4 00:48:54.285 iops : min= 6110, max= 6416, avg=6312.00, stdev=138.42, samples=4 00:48:54.285 write: IOPS=6305, BW=24.6MiB/s (25.8MB/s)(49.5MiB/2010msec); 0 zone resets 00:48:54.285 slat (nsec): min=1588, max=301171, avg=2278.85, stdev=3253.49 00:48:54.285 clat (usec): min=3278, max=17723, avg=9574.37, stdev=817.79 00:48:54.285 lat (usec): min=3296, max=17725, avg=9576.65, stdev=817.51 00:48:54.285 clat percentiles (usec): 00:48:54.285 | 1.00th=[ 7832], 
5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:48:54.285 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:48:54.285 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10421], 95.00th=[10683], 00:48:54.285 | 99.00th=[11338], 99.50th=[11600], 99.90th=[16581], 99.95th=[16909], 00:48:54.285 | 99.99th=[17433] 00:48:54.285 bw ( KiB/s): min=25024, max=25544, per=99.98%, avg=25216.00, stdev=247.96, samples=4 00:48:54.285 iops : min= 6256, max= 6386, avg=6304.00, stdev=61.99, samples=4 00:48:54.285 lat (msec) : 4=0.02%, 10=46.59%, 20=53.38% 00:48:54.285 cpu : usr=78.15%, sys=17.57%, ctx=4, majf=0, minf=5 00:48:54.285 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:48:54.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:54.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:48:54.285 issued rwts: total=12685,12674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:54.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:48:54.285 00:48:54.285 Run status group 0 (all jobs): 00:48:54.285 READ: bw=24.7MiB/s (25.8MB/s), 24.7MiB/s-24.7MiB/s (25.8MB/s-25.8MB/s), io=49.6MiB (52.0MB), run=2010-2010msec 00:48:54.285 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=49.5MiB (51.9MB), run=2010-2010msec 00:48:54.285 15:02:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:48:54.285 15:02:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:48:54.285 15:02:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:48:54.544 15:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:48:54.804 15:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:48:54.804 15:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:48:55.064 15:02:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:48:57.605 rmmod nvme_tcp 00:48:57.605 rmmod nvme_fabrics 00:48:57.605 rmmod nvme_keyring 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 105015 ']' 00:48:57.605 
15:02:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 105015 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 105015 ']' 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 105015 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105015 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:48:57.605 killing process with pid 105015 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105015' 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 105015 00:48:57.605 15:02:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 105015 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:48:57.605 00:48:57.605 real 0m19.771s 00:48:57.605 user 1m24.836s 00:48:57.605 sys 0m3.927s 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:48:57.605 15:02:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:48:57.605 ************************************ 00:48:57.605 END TEST nvmf_fio_host 00:48:57.605 ************************************ 00:48:57.605 15:02:17 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:48:57.605 15:02:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:48:57.605 15:02:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:48:57.605 15:02:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:48:57.605 ************************************ 00:48:57.605 START TEST nvmf_failover 00:48:57.605 ************************************ 00:48:57.605 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:48:57.866 * Looking for test storage... 
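For reference, the 4088 passed to bdev_lvol_create in the fio host test above is simply the lvstore's free space expressed in MiB: 1022 free clusters times a 4194304-byte cluster size. A minimal standalone sketch of that sizing step, using the same rpc.py and jq calls seen in the trace (selecting the lvstore by name rather than by UUID):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Fetch every lvstore and pull out the free-space figures for lvs_n_0.
  lvs_info=$($rpc bdev_lvol_get_lvstores)
  fc=$(echo "$lvs_info" | jq '.[] | select(.name=="lvs_n_0") .free_clusters')   # 1022 in this run
  cs=$(echo "$lvs_info" | jq '.[] | select(.name=="lvs_n_0") .cluster_size')    # 4194304 in this run
  # 1022 * 4194304 / 1048576 = 4088 MiB of usable space.
  free_mb=$((fc * cs / 1048576))
  # Carve a nested lvol bdev that consumes the whole lvstore.
  $rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 "$free_mb"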
00:48:57.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:48:57.866 15:02:17 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:48:57.867 
15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:48:57.867 Cannot find device "nvmf_tgt_br" 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:48:57.867 Cannot find device "nvmf_tgt_br2" 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:48:57.867 Cannot find device "nvmf_tgt_br" 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:48:57.867 Cannot find device "nvmf_tgt_br2" 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:48:57.867 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
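The nvmf_veth_init calls that follow rebuild the test topology from scratch each run: one target network namespace, veth pairs into it for two target interfaces plus an initiator interface, and a bridge tying the host-side ends together. Condensed out of the trace below into plain commands (interface names and addresses exactly as nvmf/common.sh uses them here), the setup amounts roughly to:

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: one initiator-facing, two target-facing (primary + secondary IP).
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Target ends live inside the namespace; addresses per the 10.0.0.0/24 test net.
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side ends so initiator and target can reach each other.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Let NVMe/TCP (port 4420) in and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check that the target namespace is reachable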
00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:48:58.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:48:58.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:48:58.127 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:48:58.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:48:58.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:48:58.127 00:48:58.127 --- 10.0.0.2 ping statistics --- 00:48:58.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:58.127 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:48:58.128 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:48:58.128 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:48:58.128 00:48:58.128 --- 10.0.0.3 ping statistics --- 00:48:58.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:58.128 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:48:58.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:48:58.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:48:58.128 00:48:58.128 --- 10.0.0.1 ping statistics --- 00:48:58.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:48:58.128 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=105745 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 105745 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 105745 ']' 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:48:58.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
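The waitforlisten step above blocks until the just-launched nvmf_tgt is answering on its RPC socket. A rough shell equivalent of that start-and-wait step, with the caveat that this is a simplified sketch and not the actual autotest_common.sh helper (rpc_get_methods is used here purely as a liveness probe):

  # -m 0xE (binary 1110) puts the reactors on cores 1-3, leaving core 0 free
  # for the host-side tools; the target runs inside the test namespace.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Poll the default RPC socket until the target responds, bailing out if it died.
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done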
00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:48:58.128 15:02:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:48:58.388 [2024-07-22 15:02:17.793722] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:48:58.388 [2024-07-22 15:02:17.793794] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:48:58.388 [2024-07-22 15:02:17.940739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:48:58.648 [2024-07-22 15:02:18.026130] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:48:58.648 [2024-07-22 15:02:18.026184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:48:58.648 [2024-07-22 15:02:18.026191] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:48:58.648 [2024-07-22 15:02:18.026197] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:48:58.648 [2024-07-22 15:02:18.026201] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:48:58.648 [2024-07-22 15:02:18.026418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:48:58.648 [2024-07-22 15:02:18.026532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:48:58.648 [2024-07-22 15:02:18.026538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:48:59.221 15:02:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:48:59.221 15:02:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:48:59.221 15:02:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:48:59.221 15:02:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:48:59.221 15:02:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:48:59.221 15:02:18 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:48:59.221 15:02:18 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:48:59.483 [2024-07-22 15:02:18.914117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:48:59.483 15:02:18 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:48:59.742 Malloc0 00:48:59.742 15:02:19 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:48:59.742 15:02:19 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:00.002 15:02:19 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:00.262 [2024-07-22 15:02:19.731529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:00.262 15:02:19 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:49:00.521 [2024-07-22 15:02:19.935217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:49:00.521 15:02:19 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:49:00.521 [2024-07-22 15:02:20.130992] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:49:00.781 15:02:20 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:49:00.781 15:02:20 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=105851 00:49:00.781 15:02:20 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:00.782 15:02:20 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 105851 /var/tmp/bdevperf.sock 00:49:00.782 15:02:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 105851 ']' 00:49:00.782 15:02:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:49:00.782 15:02:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:00.782 15:02:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:49:00.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
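With listeners up on ports 4420, 4421 and 4422 and bdevperf idling in -z mode, the rest of the test is the failover exercise proper: give bdevperf two paths to cnode1, start the 15-second verify workload, then remove listeners while I/O is in flight so the NVMe bdev has to fail over between paths. Stripped down to its RPC skeleton (the individual commands appear further down in this trace; the sleep stands in for the test's pacing):

  brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Two paths to the same subsystem; both land on the one NVMe0 bdev.
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
  # Start the queued verify workload inside the already-running bdevperf.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  # Pull the active listener out from under it; I/O should fail over to 4421.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Add a third path, then retire the second one as well.
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421

The trace below continues in the same vein: 4420 is later re-added and 4422 removed before the test waits for the workload (pid 105894 in this run) to finish.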
00:49:00.782 15:02:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:00.782 15:02:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:49:01.719 15:02:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:01.719 15:02:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:49:01.719 15:02:21 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:01.719 NVMe0n1 00:49:01.719 15:02:21 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:01.979 00:49:01.979 15:02:21 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:49:01.979 15:02:21 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=105894 00:49:01.979 15:02:21 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:49:03.363 15:02:22 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:03.363 [2024-07-22 15:02:22.773271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 [2024-07-22 15:02:22.773323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 [2024-07-22 15:02:22.773331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 [2024-07-22 15:02:22.773337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 [2024-07-22 15:02:22.773342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 [2024-07-22 15:02:22.773347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 [2024-07-22 15:02:22.773352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 [2024-07-22 15:02:22.773356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30eb0 is same with the state(5) to be set 00:49:03.363 15:02:22 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:49:06.656 15:02:25 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:06.656 00:49:06.656 15:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:49:06.656 [2024-07-22 15:02:26.228120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228191] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228204] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228228] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228253] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.656 [2024-07-22 15:02:26.228266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228312] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 [2024-07-22 15:02:26.228326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa316e0 is same with the state(5) to be set 00:49:06.657 15:02:26 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:49:09.950 15:02:29 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:09.950 [2024-07-22 15:02:29.427770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:09.950 15:02:29 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:49:10.887 15:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:49:11.147 [2024-07-22 15:02:30.617949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.617992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.617998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 
is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.147 [2024-07-22 15:02:30.618069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618129] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618161] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618180] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618185] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618214] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618240] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618274] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 [2024-07-22 15:02:30.618304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x981120 is same with the state(5) to be set 00:49:11.148 15:02:30 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 105894 00:49:17.727 0 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 105851 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 105851 ']' 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 105851 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:49:17.727 
15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105851 00:49:17.727 killing process with pid 105851 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105851' 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 105851 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 105851 00:49:17.727 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:49:17.727 [2024-07-22 15:02:20.186164] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:49:17.727 [2024-07-22 15:02:20.186251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105851 ] 00:49:17.727 [2024-07-22 15:02:20.323803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:17.727 [2024-07-22 15:02:20.370263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:17.727 Running I/O for 15 seconds... 00:49:17.727 [2024-07-22 15:02:22.773797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.727 [2024-07-22 15:02:22.773839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.727 [2024-07-22 15:02:22.773860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.727 [2024-07-22 15:02:22.773870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.727 [2024-07-22 15:02:22.773880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.727 [2024-07-22 15:02:22.773889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.727 [2024-07-22 15:02:22.773900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.727 [2024-07-22 15:02:22.773908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.727 [2024-07-22 15:02:22.773918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.727 [2024-07-22 15:02:22.773926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.727 [2024-07-22 15:02:22.773936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.727 [2024-07-22 15:02:22.773944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated nvme_qpair.c *NOTICE* entries, 15:02:22.773954 through 15:02:22.776506: each queued WRITE/READ command on sqid:1 (lba 98816-99824) printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:49:17.730 [2024-07-22 15:02:22.776516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6542c0 is same with the state(5) to be set 
00:49:17.730 [2024-07-22 15:02:22.776528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:49:17.730 [2024-07-22 15:02:22.776535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:49:17.730 [2024-07-22 15:02:22.776541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99832 len:8 PRP1 0x0 PRP2 0x0 
00:49:17.730 [2024-07-22 15:02:22.776551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.730 [2024-07-22 15:02:22.776597] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6542c0 was disconnected and freed. reset controller. 
00:49:17.730 [2024-07-22 15:02:22.776632] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:49:17.730 [2024-07-22 15:02:22.776723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.730 [2024-07-22 15:02:22.776736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.730 [2024-07-22 15:02:22.776746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.731 [2024-07-22 15:02:22.776756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.731 [2024-07-22 15:02:22.776765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.731 [2024-07-22 15:02:22.776774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.731 [2024-07-22 15:02:22.776784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.731 [2024-07-22 15:02:22.776793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.731 [2024-07-22 15:02:22.776802] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:49:17.731 [2024-07-22 15:02:22.776841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6350c0 (9): Bad file descriptor 
00:49:17.731 [2024-07-22 15:02:22.779915] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:49:17.731 [2024-07-22 15:02:22.815522] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:49:17.731 [2024-07-22 15:02:26.227859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.731 [2024-07-22 15:02:26.227928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.731 [2024-07-22 15:02:26.227940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.731 [2024-07-22 15:02:26.227972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.731 [2024-07-22 15:02:26.227981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.731 [2024-07-22 15:02:26.227989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.731 [2024-07-22 15:02:26.227998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:49:17.731 [2024-07-22 15:02:26.228006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:49:17.731 [2024-07-22 15:02:26.228014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6350c0 is same with the state(5) to be set 
[... repeated nvme_qpair.c *NOTICE* entries, 15:02:26.229451 through 15:02:26.230730: each queued WRITE/READ command on sqid:1 (lba 43856-44560) printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) qid:1 ...]
00:49:17.733 [2024-07-22 15:02:26.230739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43936 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 
[2024-07-22 15:02:26.230938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.230984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.230992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:49:17.733 [2024-07-22 15:02:26.231169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.733 [2024-07-22 15:02:26.231356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.733 [2024-07-22 15:02:26.231366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 
15:02:26.231735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.734 [2024-07-22 15:02:26.231801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.734 [2024-07-22 15:02:26.231839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44824 len:8 PRP1 0x0 PRP2 0x0 00:49:17.734 [2024-07-22 15:02:26.231847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.734 [2024-07-22 15:02:26.231864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.734 [2024-07-22 15:02:26.231870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44832 len:8 PRP1 0x0 PRP2 0x0 00:49:17.734 [2024-07-22 15:02:26.231885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.734 [2024-07-22 15:02:26.231899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.734 [2024-07-22 15:02:26.231906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44840 len:8 PRP1 0x0 PRP2 0x0 00:49:17.734 [2024-07-22 15:02:26.231913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.734 [2024-07-22 15:02:26.231928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.734 [2024-07-22 15:02:26.231935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44848 len:8 PRP1 0x0 PRP2 0x0 00:49:17.734 [2024-07-22 15:02:26.231955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.734 [2024-07-22 15:02:26.231970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.734 [2024-07-22 15:02:26.231976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44856 len:8 PRP1 0x0 PRP2 0x0 00:49:17.734 [2024-07-22 15:02:26.231983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.231991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.734 [2024-07-22 15:02:26.231996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.734 [2024-07-22 15:02:26.232006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44864 len:8 PRP1 0x0 PRP2 0x0 00:49:17.734 [2024-07-22 15:02:26.232014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.232022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.734 [2024-07-22 15:02:26.232027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.734 [2024-07-22 15:02:26.232033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44872 len:8 PRP1 0x0 PRP2 0x0 00:49:17.734 [2024-07-22 15:02:26.232040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.734 [2024-07-22 15:02:26.232083] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x801180 was disconnected and freed. reset controller. 00:49:17.734 [2024-07-22 15:02:26.232093] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:49:17.734 [2024-07-22 15:02:26.232102] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:17.734 [2024-07-22 15:02:26.234845] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:17.734 [2024-07-22 15:02:26.234881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6350c0 (9): Bad file descriptor 00:49:17.734 [2024-07-22 15:02:26.268020] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:49:17.734 [2024-07-22 15:02:30.619730 - 15:02:30.621590] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ commands (sqid:1, lba 50800-51168, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, lba 51176-51520, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:49:17.737 [2024-07-22 15:02:30.621599] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.737 [2024-07-22 15:02:30.621607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.737 [2024-07-22 15:02:30.621627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.737 [2024-07-22 15:02:30.621644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.737 [2024-07-22 15:02:30.621663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:49:17.737 [2024-07-22 15:02:30.621689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51568 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51576 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51584 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51592 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51600 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51608 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51616 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51624 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.621977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51632 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.621985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.621993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.621998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 
[2024-07-22 15:02:30.622004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51640 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.622012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.622019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.622024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.622030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51648 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.622037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.622045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.622051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.622057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51656 len:8 PRP1 0x0 PRP2 0x0 00:49:17.737 [2024-07-22 15:02:30.622064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.737 [2024-07-22 15:02:30.622075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.737 [2024-07-22 15:02:30.622081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.737 [2024-07-22 15:02:30.622087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51664 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51672 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51680 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51688 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51696 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51704 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51712 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51720 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51728 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.622321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.622326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.622332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:51736 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.622339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.638736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.638755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.638763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51744 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.638774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.638786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.638794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.638802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51752 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.638813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.638836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.638846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.638859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51760 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.638873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.638887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.638897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.638908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51768 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.638922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.638936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.638946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.638957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51776 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.638970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.638985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.639005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.639016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51784 len:8 PRP1 0x0 PRP2 0x0 
00:49:17.738 [2024-07-22 15:02:30.639031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.639055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.639065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51792 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.639079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.639105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.639115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51800 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.639129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.639154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.639164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51808 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.639177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:49:17.738 [2024-07-22 15:02:30.639201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:49:17.738 [2024-07-22 15:02:30.639212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51816 len:8 PRP1 0x0 PRP2 0x0 00:49:17.738 [2024-07-22 15:02:30.639226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639289] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x801140 was disconnected and freed. reset controller. 
00:49:17.738 [2024-07-22 15:02:30.639307] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:49:17.738 [2024-07-22 15:02:30.639372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:17.738 [2024-07-22 15:02:30.639391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:17.738 [2024-07-22 15:02:30.639425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:17.738 [2024-07-22 15:02:30.639456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:17.738 [2024-07-22 15:02:30.639485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:17.738 [2024-07-22 15:02:30.639508] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:17.738 [2024-07-22 15:02:30.639566] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6350c0 (9): Bad file descriptor 00:49:17.738 [2024-07-22 15:02:30.644339] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:17.738 [2024-07-22 15:02:30.679335] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:49:17.738 00:49:17.738 Latency(us) 00:49:17.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:17.739 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:49:17.739 Verification LBA range: start 0x0 length 0x4000 00:49:17.739 NVMe0n1 : 15.01 11393.23 44.50 297.82 0.00 10925.43 622.45 49910.39 00:49:17.739 =================================================================================================================== 00:49:17.739 Total : 11393.23 44.50 297.82 0.00 10925.43 622.45 49910.39 00:49:17.739 Received shutdown signal, test time was about 15.000000 seconds 00:49:17.739 00:49:17.739 Latency(us) 00:49:17.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:17.739 =================================================================================================================== 00:49:17.739 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:49:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=106095 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 106095 /var/tmp/bdevperf.sock 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 106095 ']' 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:17.739 15:02:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:49:18.307 15:02:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:18.307 15:02:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:49:18.307 15:02:37 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:49:18.566 [2024-07-22 15:02:38.031585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:49:18.566 15:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:49:18.827 [2024-07-22 15:02:38.243387] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:49:18.827 15:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:19.086 NVMe0n1 00:49:19.086 15:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:19.345 00:49:19.345 15:02:38 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:19.613 00:49:19.613 15:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:19.613 15:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:49:19.891 15:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:19.891 15:02:39 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:49:23.186 15:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:23.186 15:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
grep -q NVMe0 00:49:23.186 15:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:49:23.186 15:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=106228 00:49:23.186 15:02:42 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 106228 00:49:24.565 0 00:49:24.565 15:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:49:24.565 [2024-07-22 15:02:36.963177] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:49:24.565 [2024-07-22 15:02:36.963270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106095 ] 00:49:24.565 [2024-07-22 15:02:37.090324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:24.565 [2024-07-22 15:02:37.159744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:24.565 [2024-07-22 15:02:39.454998] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:49:24.565 [2024-07-22 15:02:39.455108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:24.565 [2024-07-22 15:02:39.455125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:24.565 [2024-07-22 15:02:39.455138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:24.565 [2024-07-22 15:02:39.455148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:24.565 [2024-07-22 15:02:39.455159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:24.565 [2024-07-22 15:02:39.455169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:24.565 [2024-07-22 15:02:39.455179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:24.565 [2024-07-22 15:02:39.455188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:24.565 [2024-07-22 15:02:39.455198] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:49:24.565 [2024-07-22 15:02:39.455236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:49:24.565 [2024-07-22 15:02:39.455258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecc0c0 (9): Bad file descriptor 00:49:24.565 [2024-07-22 15:02:39.460063] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:49:24.565 Running I/O for 1 seconds... 
00:49:24.565 00:49:24.565 Latency(us) 00:49:24.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:49:24.565 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:49:24.565 Verification LBA range: start 0x0 length 0x4000 00:49:24.565 NVMe0n1 : 1.00 9662.24 37.74 0.00 0.00 13195.28 1559.70 14480.88 00:49:24.565 =================================================================================================================== 00:49:24.565 Total : 9662.24 37.74 0.00 0.00 13195.28 1559.70 14480.88 00:49:24.565 15:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:24.565 15:02:43 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:49:24.565 15:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:24.824 15:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:24.824 15:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:49:24.824 15:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:49:25.084 15:02:44 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 106095 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 106095 ']' 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 106095 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106095 00:49:28.374 killing process with pid 106095 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106095' 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 106095 00:49:28.374 15:02:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 106095 00:49:28.633 15:02:48 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:49:28.633 15:02:48 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:49:28.892 
15:02:48 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:28.892 rmmod nvme_tcp 00:49:28.892 rmmod nvme_fabrics 00:49:28.892 rmmod nvme_keyring 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 105745 ']' 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 105745 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 105745 ']' 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 105745 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105745 00:49:28.892 killing process with pid 105745 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:49:28.892 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105745' 00:49:28.893 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 105745 00:49:28.893 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 105745 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:49:29.153 00:49:29.153 real 0m31.524s 00:49:29.153 user 2m2.027s 00:49:29.153 sys 0m4.009s 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:49:29.153 ************************************ 00:49:29.153 END TEST nvmf_failover 00:49:29.153 ************************************ 00:49:29.153 15:02:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:49:29.153 15:02:48 
nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:49:29.153 15:02:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:49:29.153 15:02:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:49:29.153 15:02:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:29.153 ************************************ 00:49:29.153 START TEST nvmf_host_discovery 00:49:29.153 ************************************ 00:49:29.153 15:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:49:29.413 * Looking for test storage... 00:49:29.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:29.413 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:49:29.414 Cannot find device "nvmf_tgt_br" 00:49:29.414 
15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:49:29.414 Cannot find device "nvmf_tgt_br2" 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:49:29.414 Cannot find device "nvmf_tgt_br" 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:49:29.414 15:02:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:49:29.414 Cannot find device "nvmf_tgt_br2" 00:49:29.414 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:49:29.414 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:29.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:29.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:49:29.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:29.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:49:29.674 00:49:29.674 --- 10.0.0.2 ping statistics --- 00:49:29.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:29.674 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:49:29.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:29.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:49:29.674 00:49:29.674 --- 10.0.0.3 ping statistics --- 00:49:29.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:29.674 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:29.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:29.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:49:29.674 00:49:29.674 --- 10.0.0.1 ping statistics --- 00:49:29.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:29.674 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:29.674 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=106526 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 106526 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 106526 ']' 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:29.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:29.675 15:02:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:29.934 [2024-07-22 15:02:49.344489] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:49:29.934 [2024-07-22 15:02:49.344545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:29.934 [2024-07-22 15:02:49.482642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:29.934 [2024-07-22 15:02:49.532476] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:29.934 [2024-07-22 15:02:49.532631] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
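The interface names, addresses and firewall rules in the trace above all come from the nvmf/common.sh setup helpers. Condensed into a standalone sketch (run as root, using the same names and the 10.0.0.0/24 subnet the log shows; nothing here is taken from outside the trace), the topology the harness builds looks roughly like this:

  # Sketch of the test topology from the trace above: two veth pairs feed the
  # target namespace, one veth pair stays on the host, and all of them are bridged.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

The two pings at the end mirror the connectivity check in the trace; the earlier "Cannot find device" and "Cannot open network namespace" messages are only the cleanup of a previous run finding nothing to remove.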
00:49:29.934 [2024-07-22 15:02:49.532666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:29.934 [2024-07-22 15:02:49.532702] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:29.934 [2024-07-22 15:02:49.532718] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:29.934 [2024-07-22 15:02:49.532764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:30.872 [2024-07-22 15:02:50.253480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:30.872 [2024-07-22 15:02:50.265541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:30.872 null0 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:30.872 null1 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:30.872 15:02:50 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=106577 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 106577 /tmp/host.sock 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 106577 ']' 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:49:30.872 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:30.872 15:02:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:30.872 [2024-07-22 15:02:50.358089] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:49:30.872 [2024-07-22 15:02:50.358138] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106577 ] 00:49:30.872 [2024-07-22 15:02:50.496054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:49:31.132 [2024-07-22 15:02:50.546997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.700 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:31.701 15:02:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:31.701 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.960 
15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:31.960 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:31.960 [2024-07-22 15:02:51.587284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:32.219 
15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:32.219 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:49:32.220 15:02:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:49:32.788 [2024-07-22 15:02:52.242356] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:49:32.788 [2024-07-22 15:02:52.242394] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:49:32.788 [2024-07-22 15:02:52.242407] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:49:32.788 [2024-07-22 15:02:52.328334] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:49:32.788 [2024-07-22 15:02:52.383993] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:49:32.788 [2024-07-22 15:02:52.384024] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.355 15:02:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:49:33.355 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:49:33.642 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:49:33.642 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.642 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.642 15:02:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:49:33.642 15:02:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.642 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.643 [2024-07-22 15:02:53.137030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:49:33.643 [2024-07-22 15:02:53.137425] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:49:33.643 [2024-07-22 15:02:53.137458] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:33.643 15:02:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:33.643 [2024-07-22 15:02:53.223353] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:49:33.643 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:33.910 [2024-07-22 15:02:53.282497] 
bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:49:33.910 [2024-07-22 15:02:53.282525] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:49:33.910 [2024-07-22 15:02:53.282530] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:49:33.910 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:49:33.910 15:02:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:34.848 [2024-07-22 15:02:54.419255] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:49:34.848 [2024-07-22 15:02:54.419287] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:49:34.848 [2024-07-22 15:02:54.419945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:49:34.848 [2024-07-22 15:02:54.419974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:34.848 [2024-07-22 15:02:54.419983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:49:34.848 [2024-07-22 15:02:54.419990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:34.848 [2024-07-22 15:02:54.419998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:49:34.848 [2024-07-22 15:02:54.420004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:34.848 [2024-07-22 15:02:54.420011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:49:34.848 [2024-07-22 15:02:54.420017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:49:34.848 [2024-07-22 15:02:54.420023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:34.848 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
get_subsystem_names 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:34.849 [2024-07-22 15:02:54.429877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:34.849 [2024-07-22 15:02:54.439876] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:49:34.849 [2024-07-22 15:02:54.439977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:34.849 [2024-07-22 15:02:54.439991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff09c0 with addr=10.0.0.2, port=4420 00:49:34.849 [2024-07-22 15:02:54.439998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:34.849 [2024-07-22 15:02:54.440008] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:34.849 [2024-07-22 15:02:54.440018] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:34.849 [2024-07-22 15:02:54.440023] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:49:34.849 [2024-07-22 15:02:54.440031] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:34.849 [2024-07-22 15:02:54.440040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:34.849 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:34.849 [2024-07-22 15:02:54.449903] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:49:34.849 [2024-07-22 15:02:54.449973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:34.849 [2024-07-22 15:02:54.449985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff09c0 with addr=10.0.0.2, port=4420 00:49:34.849 [2024-07-22 15:02:54.449992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:34.849 [2024-07-22 15:02:54.450002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:34.849 [2024-07-22 15:02:54.450010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:34.849 [2024-07-22 15:02:54.450016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:49:34.849 [2024-07-22 15:02:54.450021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:34.849 [2024-07-22 15:02:54.450031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
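Up to this point the discovery.sh trace has provisioned the target and started a discovery service on the host application. Outside the harness the same flow can be sketched with SPDK's scripts/rpc.py client; the method names and arguments below are copied from the trace, while the rpc.py invocation itself is an assumption (the test goes through its rpc_cmd wrapper against the same sockets):

  TGT=/var/tmp/spdk.sock   # nvmf_tgt -i 0 -m 0x2, running inside nvmf_tgt_ns_spdk
  HOST=/tmp/host.sock      # second nvmf_tgt -m 0x1 acting as the discovery host
  RPC=./scripts/rpc.py     # assumed path to the standard SPDK RPC client

  # Target side: TCP transport, discovery listener on 8009, one null bdev in a subsystem
  # that only the test host NQN is allowed to see.
  $RPC -s $TGT nvmf_create_transport -t tcp -o -u 8192
  $RPC -s $TGT nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $RPC -s $TGT bdev_null_create null0 1000 512
  $RPC -s $TGT nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $RPC -s $TGT nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC -s $TGT nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC -s $TGT nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

  # Host side: attach a discovery service; it auto-connects to whatever it is allowed to see.
  $RPC -s $HOST bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  $RPC -s $HOST bdev_nvme_get_controllers    # expect "nvme0" once cnode0 becomes visible
  $RPC -s $HOST bdev_get_bdevs               # expect "nvme0n1" for the attached namespace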
00:49:34.849 [2024-07-22 15:02:54.459924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:49:34.849 [2024-07-22 15:02:54.459979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:34.849 [2024-07-22 15:02:54.459989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff09c0 with addr=10.0.0.2, port=4420 00:49:34.849 [2024-07-22 15:02:54.459995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:34.849 [2024-07-22 15:02:54.460004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:34.849 [2024-07-22 15:02:54.460013] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:34.849 [2024-07-22 15:02:54.460017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:49:34.849 [2024-07-22 15:02:54.460022] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:34.849 [2024-07-22 15:02:54.460031] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:34.849 [2024-07-22 15:02:54.469943] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:49:34.849 [2024-07-22 15:02:54.470028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:34.849 [2024-07-22 15:02:54.470041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff09c0 with addr=10.0.0.2, port=4420 00:49:34.849 [2024-07-22 15:02:54.470048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:34.849 [2024-07-22 15:02:54.470059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:34.849 [2024-07-22 15:02:54.470068] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:34.849 [2024-07-22 15:02:54.470073] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:49:34.849 [2024-07-22 15:02:54.470079] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:34.849 [2024-07-22 15:02:54.470100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
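The connect() errno 111 retries on either side of this point are the expected fallout of the step the test just performed: a second listener on 4421 was added and the original 4420 listener removed, so the discovery log page makes bdev_nvme retire the 4420 path while the 4421 path stays attached. With the same assumed rpc.py client (sockets as in the sketch above), that step reduces to:

  TGT=/var/tmp/spdk.sock; HOST=/tmp/host.sock; RPC=./scripts/rpc.py   # as in the sketch above

  # Add a second path, then drop the first; the discovery service reacts to the AER/log page.
  $RPC -s $TGT nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  $RPC -s $TGT nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # On the host, watch the controller's paths converge from "4420 4421" to "4421".
  $RPC -s $HOST bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs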
00:49:35.108 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:35.109 [2024-07-22 15:02:54.479968] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:49:35.109 [2024-07-22 15:02:54.480019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:35.109 [2024-07-22 15:02:54.480030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff09c0 with addr=10.0.0.2, port=4420 00:49:35.109 [2024-07-22 15:02:54.480036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:35.109 [2024-07-22 15:02:54.480047] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:35.109 [2024-07-22 15:02:54.480055] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:35.109 [2024-07-22 15:02:54.480061] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:49:35.109 [2024-07-22 15:02:54.480067] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:35.109 [2024-07-22 15:02:54.480076] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
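The waitforcondition checks that dominate this trace all compare flattened RPC output. The helpers behind them, reconstructed from the jq/sort/xargs pipelines visible in the discovery.sh xtrace (rpc_cmd is the harness wrapper around the RPC client, talking to /tmp/host.sock here), amount to roughly:

  # Reconstructed from the @55/@59/@63 trace lines above.
  get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
  get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }
  get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
                            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }
  # e.g. [[ "$(get_subsystem_paths nvme0)" == "4421" ]] is the condition being polled at this point.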
00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:35.109 [2024-07-22 15:02:54.489980] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:49:35.109 [2024-07-22 15:02:54.490041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:35.109 [2024-07-22 15:02:54.490052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff09c0 with addr=10.0.0.2, port=4420 00:49:35.109 [2024-07-22 15:02:54.490057] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:35.109 [2024-07-22 15:02:54.490066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:35.109 [2024-07-22 15:02:54.490075] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:35.109 [2024-07-22 15:02:54.490079] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:49:35.109 [2024-07-22 15:02:54.490085] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:49:35.109 [2024-07-22 15:02:54.490093] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:35.109 [2024-07-22 15:02:54.500002] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:49:35.109 [2024-07-22 15:02:54.500079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:35.109 [2024-07-22 15:02:54.500092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff09c0 with addr=10.0.0.2, port=4420 00:49:35.109 [2024-07-22 15:02:54.500099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff09c0 is same with the state(5) to be set 00:49:35.109 [2024-07-22 15:02:54.500110] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff09c0 (9): Bad file descriptor 00:49:35.109 [2024-07-22 15:02:54.500119] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:49:35.109 [2024-07-22 15:02:54.500125] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:49:35.109 [2024-07-22 15:02:54.500131] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
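Each waitforcondition call above expands to the same retry loop in common/autotest_common.sh. Pieced together from the @910-@916 fragments in the xtrace (the failure path never triggers in this log and is assumed here), it behaves roughly like:

  # Poll an arbitrary condition for up to ~10 seconds; a reconstruction, not the verbatim helper.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1   # assumption: the timeout branch is not visible in this trace
  }
  # Used above e.g. as: waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'

The notification checks follow the same pattern, with the condition built from notify_get_notifications -i <last_id> piped through jq '. | length'.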
00:49:35.109 [2024-07-22 15:02:54.500140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:49:35.109 [2024-07-22 15:02:54.505249] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:49:35.109 [2024-07-22 15:02:54.505272] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # get_notification_count 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
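Annotation: the get_* conditions being polled are thin rpc.py pipelines whose shape can be read off the host/discovery.sh@55/@59/@63/@74 trace lines above. A condensed sketch follows; rpc_cmd is the suite's wrapper around scripts/rpc.py talking to the host app on /tmp/host.sock, the real definitions live in test/nvmf/host/discovery.sh, and the notify_id update rule is inferred from the traced values (2 -> 4), not shown verbatim.

get_bdev_list() { # host/discovery.sh@55
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_names() { # host/discovery.sh@59
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() { # host/discovery.sh@63; $1 is the controller name, e.g. nvme0
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_notification_count() { # host/discovery.sh@74-@75; counts notifications newer than $notify_id
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count)) # inferred: 2+0=2, then 2+2=4 as traced
}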
00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:49:35.109 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:35.110 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.110 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:35.110 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:35.110 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:35.110 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:35.110 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:35.368 15:02:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:36.303 [2024-07-22 15:02:55.815834] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:49:36.303 [2024-07-22 15:02:55.815872] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:49:36.303 [2024-07-22 15:02:55.815887] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:49:36.303 [2024-07-22 15:02:55.901797] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:49:36.563 [2024-07-22 15:02:55.960740] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:49:36.563 [2024-07-22 15:02:55.960785] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:49:36.563 2024/07/22 15:02:55 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:49:36.563 request: 00:49:36.563 { 00:49:36.563 "method": "bdev_nvme_start_discovery", 00:49:36.563 "params": { 00:49:36.563 "name": "nvme", 00:49:36.563 "trtype": "tcp", 00:49:36.563 "traddr": "10.0.0.2", 00:49:36.563 "hostnqn": "nqn.2021-12.io.spdk:test", 00:49:36.563 "adrfam": "ipv4", 00:49:36.563 "trsvcid": "8009", 00:49:36.563 "wait_for_attach": true 00:49:36.563 } 00:49:36.563 } 00:49:36.563 Got JSON-RPC error response 00:49:36.563 GoRPCClient: error on JSON-RPC call 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:49:36.563 15:02:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:36.563 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:36.564 2024/07/22 15:02:56 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:49:36.564 request: 00:49:36.564 { 00:49:36.564 "method": "bdev_nvme_start_discovery", 00:49:36.564 "params": { 00:49:36.564 "name": "nvme_second", 00:49:36.564 "trtype": "tcp", 00:49:36.564 "traddr": "10.0.0.2", 00:49:36.564 "hostnqn": "nqn.2021-12.io.spdk:test", 00:49:36.564 "adrfam": "ipv4", 00:49:36.564 "trsvcid": "8009", 00:49:36.564 "wait_for_attach": true 00:49:36.564 } 00:49:36.564 } 00:49:36.564 Got JSON-RPC error response 00:49:36.564 GoRPCClient: error on JSON-RPC call 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:49:36.564 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:36.824 15:02:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:37.762 [2024-07-22 15:02:57.236401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:37.762 [2024-07-22 15:02:57.236459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102b160 with addr=10.0.0.2, port=8010 00:49:37.762 [2024-07-22 15:02:57.236476] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:49:37.762 [2024-07-22 15:02:57.236483] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:49:37.762 [2024-07-22 15:02:57.236489] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:49:38.700 [2024-07-22 15:02:58.234481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:49:38.700 [2024-07-22 15:02:58.234536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x102b160 with addr=10.0.0.2, port=8010 00:49:38.700 [2024-07-22 15:02:58.234554] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:49:38.700 [2024-07-22 15:02:58.234560] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:49:38.700 [2024-07-22 15:02:58.234565] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:49:39.636 [2024-07-22 15:02:59.232447] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 
00:49:39.636 2024/07/22 15:02:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:49:39.636 request: 00:49:39.636 { 00:49:39.636 "method": "bdev_nvme_start_discovery", 00:49:39.636 "params": { 00:49:39.636 "name": "nvme_second", 00:49:39.636 "trtype": "tcp", 00:49:39.636 "traddr": "10.0.0.2", 00:49:39.636 "hostnqn": "nqn.2021-12.io.spdk:test", 00:49:39.636 "adrfam": "ipv4", 00:49:39.636 "trsvcid": "8010", 00:49:39.636 "attach_timeout_ms": 3000 00:49:39.636 } 00:49:39.636 } 00:49:39.636 Got JSON-RPC error response 00:49:39.636 GoRPCClient: error on JSON-RPC call 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:49:39.636 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 106577 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:49:39.896 rmmod nvme_tcp 00:49:39.896 rmmod nvme_fabrics 00:49:39.896 rmmod nvme_keyring 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 106526 ']' 
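Annotation: pulled out of the trace above for readability, these are the three bdev_nvme_start_discovery RPCs the test issues against the already-running discovery service, with the outcomes the log records. Flag values are exactly those visible in the trace; rpc_cmd wraps scripts/rpc.py against /tmp/host.sock.

# 1) Re-registering the already-running discovery name "nvme" on 10.0.0.2:8009
#    is rejected with Code=-17 (File exists), as logged at 15:02:55.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# 2) A second discovery name against the same 8009 discovery endpoint fails the same way (-17).
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# 3) Targeting 8010 (no listener) with a 3000 ms attach timeout fails with Code=-110
#    (Connection timed out) after the connect() errno 111 retries logged above.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000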
00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 106526 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 106526 ']' 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 106526 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106526 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:49:39.896 killing process with pid 106526 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106526' 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 106526 00:49:39.896 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 106526 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:49:40.156 ************************************ 00:49:40.156 END TEST nvmf_host_discovery 00:49:40.156 ************************************ 00:49:40.156 00:49:40.156 real 0m10.930s 00:49:40.156 user 0m21.276s 00:49:40.156 sys 0m1.713s 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:49:40.156 15:02:59 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:49:40.156 15:02:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:49:40.156 15:02:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:49:40.156 15:02:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:49:40.156 ************************************ 00:49:40.156 START TEST nvmf_host_multipath_status 00:49:40.156 ************************************ 00:49:40.156 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:49:40.416 * Looking for test storage... 
00:49:40.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:49:40.416 Cannot find device "nvmf_tgt_br" 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:49:40.416 Cannot find device "nvmf_tgt_br2" 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:49:40.416 Cannot find device "nvmf_tgt_br" 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:49:40.416 15:02:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:49:40.416 Cannot find device "nvmf_tgt_br2" 00:49:40.416 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:49:40.416 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:49:40.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:49:40.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:49:40.677 15:03:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:49:40.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:49:40.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:49:40.677 00:49:40.677 --- 10.0.0.2 ping statistics --- 00:49:40.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:40.677 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:49:40.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:49:40.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:49:40.677 00:49:40.677 --- 10.0.0.3 ping statistics --- 00:49:40.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:40.677 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:49:40.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:49:40.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:49:40.677 00:49:40.677 --- 10.0.0.1 ping statistics --- 00:49:40.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:49:40.677 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=107059 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 107059 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 107059 ']' 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:49:40.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:40.677 15:03:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:49:40.677 [2024-07-22 15:03:00.281630] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:49:40.677 [2024-07-22 15:03:00.281708] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:49:40.939 [2024-07-22 15:03:00.419187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:49:40.939 [2024-07-22 15:03:00.463821] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:49:40.939 [2024-07-22 15:03:00.463888] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:49:40.939 [2024-07-22 15:03:00.463894] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:49:40.939 [2024-07-22 15:03:00.463899] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:49:40.939 [2024-07-22 15:03:00.463903] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:49:40.939 [2024-07-22 15:03:00.465122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:49:40.939 [2024-07-22 15:03:00.465123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:49:41.507 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:41.507 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:49:41.507 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:49:41.507 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:49:41.507 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:49:41.768 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:49:41.768 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=107059 00:49:41.768 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:49:41.768 [2024-07-22 15:03:01.309785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:49:41.768 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:49:42.027 Malloc0 00:49:42.027 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:49:42.287 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:49:42.287 15:03:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:49:42.546 [2024-07-22 15:03:02.056294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:49:42.546 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
00:49:42.806 [2024-07-22 15:03:02.228023] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=107157 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 107157 /var/tmp/bdevperf.sock 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 107157 ']' 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:49:42.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:49:42.806 15:03:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:49:43.773 15:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:49:43.773 15:03:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:49:43.773 15:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:49:43.773 15:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:49:44.033 Nvme0n1 00:49:44.033 15:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:49:44.600 Nvme0n1 00:49:44.600 15:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:49:44.600 15:03:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:49:46.507 15:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:49:46.507 15:03:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:49:46.767 15:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 
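Annotation: condensed from the multipath_status.sh trace above, this is the target-side and bdevperf-side setup that produces the two paths (4420/4421) whose status the test checks next. Paths and flag values are the ones the trace shows; the $rpc shorthand is introduced here only for brevity, and this is a summary of the traced commands, not the script itself.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, a 64 MiB / 512 B-block malloc namespace, and two listeners on one subsystem.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Host side (bdevperf at /var/tmp/bdevperf.sock): attach both ports, the second with -x multipath.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# The test then sets the ANA state per listener and reads path status back, e.g.:
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'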
00:49:46.767 15:03:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:48.149 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:49:48.409 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:48.409 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:49:48.409 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:49:48.409 15:03:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:48.668 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:48.668 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:49:48.668 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:49:48.668 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:48.928 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:48.928 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:49:48.928 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:49:48.928 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:48.928 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:48.928 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:49:48.928 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:49:49.188 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:49:49.448 15:03:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:49:50.386 15:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:49:50.386 15:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:49:50.386 15:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:50.386 15:03:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:49:50.646 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:50.646 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:49:50.646 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:50.646 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:50.906 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:49:51.169 15:03:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:51.169 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:49:51.169 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:51.169 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:49:51.429 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:51.429 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:49:51.429 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:49:51.429 15:03:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:51.689 15:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:51.689 15:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:49:51.689 15:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:49:51.689 15:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:49:51.948 15:03:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:49:52.887 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:49:52.887 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:49:52.887 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:52.887 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:49:53.147 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:53.147 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:49:53.147 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:53.147 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:49:53.406 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:53.406 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:49:53.406 15:03:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:53.406 15:03:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:49:53.665 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:53.923 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:53.923 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:49:53.923 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:53.923 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:49:54.183 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:54.183 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:49:54.183 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:49:54.441 15:03:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:49:54.441 15:03:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:55.818 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:49:56.078 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:56.078 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:49:56.078 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:49:56.078 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:56.337 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:56.337 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:49:56.337 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:56.337 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:49:56.596 15:03:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:56.596 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:49:56.596 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:56.596 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:49:56.596 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:56.596 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:49:56.596 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:49:56.855 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:49:57.115 15:03:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:49:58.053 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:49:58.053 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:49:58.053 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:58.053 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:49:58.313 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:58.313 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:49:58.313 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:58.313 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:49:58.572 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:58.572 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:49:58.572 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:58.572 15:03:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:49:58.572 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:58.572 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:49:58.572 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:58.572 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:49:58.831 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:49:58.831 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:49:58.831 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:49:58.831 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:59.092 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:59.092 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:49:59.092 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:49:59.092 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:49:59.351 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:49:59.351 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:49:59.351 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:49:59.351 15:03:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:49:59.610 15:03:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:50:00.548 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:50:00.548 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:50:00.548 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:00.548 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:00.808 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:00.808 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:50:00.808 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:00.808 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:01.066 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:01.066 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:01.066 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:01.066 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:01.066 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:01.066 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:01.067 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:01.067 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:01.326 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:01.327 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:50:01.327 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:01.327 15:03:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:01.587 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:01.587 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:01.587 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:01.587 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:01.847 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:01.847 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:50:01.847 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:50:01.847 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:50:02.106 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:50:02.366 15:03:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:50:03.306 15:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:50:03.306 15:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:50:03.306 15:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:03.306 15:03:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:03.565 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:03.565 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:50:03.565 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:03.565 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:03.824 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:04.083 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:04.083 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:04.083 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:04.083 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:04.343 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:04.343 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:04.343 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:04.343 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:04.603 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:04.603 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:50:04.603 15:03:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:50:04.603 15:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:50:04.862 15:03:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:50:05.801 
15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:50:05.801 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:50:05.801 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:05.801 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:06.061 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:06.061 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:50:06.061 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:06.061 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:06.321 15:03:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:06.580 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:06.580 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:06.580 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:06.581 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:06.840 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:06.840 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:06.840 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:06.840 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:07.099 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:07.099 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:50:07.099 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:50:07.099 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:50:07.359 15:03:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:50:08.299 15:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:50:08.299 15:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:50:08.299 15:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:08.299 15:03:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:08.565 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:08.565 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:50:08.565 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:08.565 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:08.836 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:08.836 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:08.836 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:08.836 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:50:09.096 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:09.096 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:09.096 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:09.096 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:09.096 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:09.096 15:03:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:09.096 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:09.096 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:09.356 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:09.356 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:50:09.356 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:09.356 15:03:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:09.615 15:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:09.615 15:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:50:09.615 15:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:50:09.875 15:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:50:09.875 15:03:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:50:11.252 15:03:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:11.511 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:11.511 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:50:11.511 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:11.511 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:50:11.771 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:11.771 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:50:11.771 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:11.771 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 107157 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 107157 ']' 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 107157 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107157 00:50:12.030 killing process with pid 107157 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107157' 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 107157 00:50:12.030 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 107157 00:50:12.292 Connection closed with partial response: 
00:50:12.292 00:50:12.292 00:50:12.292 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 107157 00:50:12.292 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:50:12.292 [2024-07-22 15:03:02.299276] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:50:12.292 [2024-07-22 15:03:02.299358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107157 ] 00:50:12.292 [2024-07-22 15:03:02.438553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:12.292 [2024-07-22 15:03:02.486473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:50:12.292 Running I/O for 90 seconds... 00:50:12.292 [2024-07-22 15:03:16.381507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.381989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.381998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.382014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.382023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.382039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.382073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.382089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:15128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.382099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.382114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.382123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:50:12.292 [2024-07-22 15:03:16.382139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.292 [2024-07-22 15:03:16.382148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
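The completions in this stretch of the dump all carry the ASYMMETRIC ACCESS INACCESSIBLE (03/02) status: these are the writes still in flight when both listeners were switched to inaccessible around 15:03:16. A quick, illustrative way to triage such a dump (try.txt path as logged; the sed keys off the bracketed timestamp so it works with or without the runner's elapsed-time prefix):

  try=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt

  # Total number of completions that failed with the INACCESSIBLE ANA status.
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$try"

  # Bucket those completions by wall-clock second.
  grep 'ASYMMETRIC ACCESS INACCESSIBLE' "$try" |
      sed -n 's/.*\[\([0-9-]* [0-9:]*\)\..*/\1/p' | sort | uniq -c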
00:50:12.293 [2024-07-22 15:03:16.382244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.382639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.293 [2024-07-22 15:03:16.382649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:50:12.293 [2024-07-22 15:03:16.384437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.293 [2024-07-22 15:03:16.384627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.293 [2024-07-22 15:03:16.384647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:14848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.384984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.384994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:50:12.294 [2024-07-22 15:03:16.385401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:16.385491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:16.385500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.294 [2024-07-22 15:03:29.422589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.294 [2024-07-22 15:03:29.422873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:50:12.294 [2024-07-22 15:03:29.422889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.422899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.422916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.422926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.422943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.422953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.422970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.422981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.422997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.423008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.423024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.423034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.423051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.423061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.423078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.295 [2024-07-22 15:03:29.423095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:50:12.295 [2024-07-22 15:03:29.424279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48072 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.295 [2024-07-22 15:03:29.424561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.295 [2024-07-22 15:03:29.424583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.295 [2024-07-22 15:03:29.424615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.295 [2024-07-22 15:03:29.424655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.295 [2024-07-22 15:03:29.424691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:50:12.295 [2024-07-22 15:03:29.424881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.295 [2024-07-22 15:03:29.424892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:50:12.296 [2024-07-22 15:03:29.424909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.296 [2024-07-22 15:03:29.424919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:50:12.296 [2024-07-22 15:03:29.424943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.296 [2024-07-22 15:03:29.424954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:50:12.296 [2024-07-22 15:03:29.424971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.296 [2024-07-22 15:03:29.424982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:50:12.296 [2024-07-22 15:03:29.424999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.296 [2024-07-22 15:03:29.425009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:50:12.296 [2024-07-22 15:03:29.425026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:50:12.296 [2024-07-22 15:03:29.425036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:50:12.296 [2024-07-22 15:03:29.425059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.296 [2024-07-22 15:03:29.425070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:50:12.296 [2024-07-22 15:03:29.425087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:50:12.296 [2024-07-22 15:03:29.425097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:50:12.296 [2024-07-22 15:03:29.425115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:50:12.296 [2024-07-22 15:03:29.425125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:50:12.296 [2024-07-22 15:03:29.425142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:50:12.296 [2024-07-22 15:03:29.425152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:50:12.296 [2024-07-22 15:03:29.425169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:50:12.296 [2024-07-22 15:03:29.425180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:50:12.296 [2024-07-22 15:03:29.425196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:50:12.296 [2024-07-22 15:03:29.425207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:50:12.296 [2024-07-22 15:03:29.425223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:50:12.296 [2024-07-22 15:03:29.425234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:50:12.296 [2024-07-22 15:03:29.425252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:50:12.296 [2024-07-22 15:03:29.425263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:50:12.296 Received shutdown signal, test time was about 27.625729 seconds
00:50:12.296
00:50:12.296                                                                 Latency(us)
00:50:12.296 Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:50:12.296 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:50:12.296      Verification LBA range: start 0x0 length 0x4000
00:50:12.296      Nvme0n1                                                              :      27.63   11268.80      44.02       0.00       0.00   11337.63      81.38 3018433.62
00:50:12.296 ===================================================================================================================
00:50:12.296 Total                                                                     :              11268.80      44.02       0.00       0.00   11337.63      81.38 3018433.62
00:50:12.296 15:03:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
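The teardown that starts here (and continues through the module unload and killprocess calls below) boils down to a short sequence: drop the subsystem over the SPDK RPC socket, unload the kernel NVMe/TCP initiator modules, and stop the target process. A minimal stand-alone sketch of those steps, using the rpc.py path and the cnode1 NQN from the trace; the $nvmfpid variable is only a placeholder for the target's pid:

    # Remove the subsystem the multipath test was exercising (default RPC socket assumed).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Unload the kernel initiator stack, mirroring the nvmfcleanup/modprobe calls in the trace.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf_tgt process and wait for it to exit; $nvmfpid is a placeholder here.
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done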
00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:50:12.555 rmmod nvme_tcp 00:50:12.555 rmmod nvme_fabrics 00:50:12.555 rmmod nvme_keyring 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 107059 ']' 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 107059 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 107059 ']' 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 107059 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:50:12.555 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107059 00:50:12.814 killing process with pid 107059 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107059' 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 107059 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 107059 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:12.814 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:50:13.073 ************************************ 00:50:13.073 END TEST nvmf_host_multipath_status 00:50:13.073 ************************************ 00:50:13.073 00:50:13.073 real 0m32.695s 00:50:13.073 user 1m45.519s 00:50:13.073 sys 0m7.319s 00:50:13.073 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:50:13.073 15:03:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:50:13.073 15:03:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:50:13.073 15:03:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:50:13.073 15:03:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:50:13.073 15:03:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:13.073 ************************************ 00:50:13.073 START TEST nvmf_discovery_remove_ifc 00:50:13.073 ************************************ 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:50:13.073 * Looking for test storage... 00:50:13.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:50:13.073 15:03:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
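The variables above pin down the test topology before any interfaces exist: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, while the target namespace nvmf_tgt_ns_spdk gets nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), all joined through the nvmf_br bridge. The nvmf_veth_init commands traced next condense to roughly the following sketch; names and addresses are the ones from the trace, and the real helper's cleanup and error handling are omitted:

    # Target-side namespace plus three veth pairs; the *_br peers stay in the
    # root namespace and are enslaved to the nvmf_br bridge further down.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing: initiator on 10.0.0.1, the two target listeners on 10.0.0.2/.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bring everything up on both sides of the namespace boundary.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the root-namespace peers and open TCP/4420 toward the initiator.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Connectivity check before the target is started.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3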
00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:13.073 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:13.074 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:50:13.398 Cannot find device "nvmf_tgt_br" 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:50:13.398 Cannot find device "nvmf_tgt_br2" 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:50:13.398 Cannot find device "nvmf_tgt_br" 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:50:13.398 Cannot find device "nvmf_tgt_br2" 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:13.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:13.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:50:13.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:13.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:50:13.398 00:50:13.398 --- 10.0.0.2 ping statistics --- 00:50:13.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:13.398 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:50:13.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:13.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:50:13.398 00:50:13.398 --- 10.0.0.3 ping statistics --- 00:50:13.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:13.398 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:50:13.398 15:03:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:13.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:50:13.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:50:13.398 00:50:13.398 --- 10.0.0.1 ping statistics --- 00:50:13.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:13.398 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:50:13.398 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=108405 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 108405 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 108405 ']' 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:13.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:50:13.657 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:13.657 [2024-07-22 15:03:33.087495] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:50:13.657 [2024-07-22 15:03:33.087617] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:50:13.657 [2024-07-22 15:03:33.227933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:13.657 [2024-07-22 15:03:33.271598] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:50:13.657 [2024-07-22 15:03:33.271753] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:50:13.657 [2024-07-22 15:03:33.271802] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:50:13.657 [2024-07-22 15:03:33.271827] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:50:13.657 [2024-07-22 15:03:33.271848] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:50:13.657 [2024-07-22 15:03:33.271883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:14.593 15:03:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:14.593 [2024-07-22 15:03:33.987068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:50:14.593 [2024-07-22 15:03:33.995146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:50:14.593 null0 00:50:14.593 [2024-07-22 15:03:34.027021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=108462 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 108462 /tmp/host.sock 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 108462 ']' 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:50:14.593 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
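For reference, the environment that nvmf/common.sh has just assembled for this test boils down to a veth/bridge topology plus two SPDK daemons. A condensed sketch, reconstructed from the trace above (the real nvmf_veth_init adds cleanup and error handling that are omitted here):

  # network namespace for the target, three veth pairs, one bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator side on 10.0.0.1, target side on 10.0.0.2/10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and hang the host-side peers off one bridge
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # open NVMe/TCP port 4420 toward the initiator and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

On top of that topology, two SPDK apps are now running, as traced above: nvmf_tgt inside the namespace (pid 108405, core mask 0x2, listening on 10.0.0.2 ports 8009 and 4420) and a second nvmf_tgt acting as the host side (pid 108462, core mask 0x1), driven over /tmp/host.sock with -L bdev_nvme debug logging enabled.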
00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:50:14.593 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:14.593 [2024-07-22 15:03:34.101233] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:50:14.593 [2024-07-22 15:03:34.101350] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108462 ] 00:50:14.852 [2024-07-22 15:03:34.239912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:50:14.852 [2024-07-22 15:03:34.287468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:15.419 15:03:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:15.419 15:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:15.419 15:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:50:15.419 15:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:15.419 15:03:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:16.799 [2024-07-22 15:03:36.015695] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:50:16.799 [2024-07-22 15:03:36.015791] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:50:16.799 [2024-07-22 15:03:36.015818] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:50:16.799 [2024-07-22 15:03:36.103672] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:50:16.799 [2024-07-22 15:03:36.165723] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:50:16.799 [2024-07-22 15:03:36.165808] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:50:16.799 [2024-07-22 
15:03:36.165842] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:50:16.799 [2024-07-22 15:03:36.165875] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:50:16.799 [2024-07-22 15:03:36.165932] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:16.799 [2024-07-22 15:03:36.174041] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15df170 was disconnected and freed. delete nvme_qpair. 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:50:16.799 15:03:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:17.737 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:17.737 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:17.737 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:17.737 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:17.737 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:17.737 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:17.737 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:17.738 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:17.738 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:50:17.738 15:03:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:50:19.125 15:03:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:20.063 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:20.063 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:20.063 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:20.063 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:20.063 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:20.063 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:20.063 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:20.064 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:20.064 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:50:20.064 15:03:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:21.001 15:03:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:50:21.001 15:03:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:21.939 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:21.939 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:21.939 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:21.939 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:21.939 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:21.939 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:21.939 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:22.199 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:22.199 [2024-07-22 15:03:41.593718] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:50:22.199 [2024-07-22 15:03:41.593828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:22.199 [2024-07-22 15:03:41.593839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:22.199 [2024-07-22 15:03:41.593848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:22.199 [2024-07-22 15:03:41.593854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:22.199 [2024-07-22 15:03:41.593872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:22.199 [2024-07-22 15:03:41.593878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:22.199 [2024-07-22 15:03:41.593884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:22.199 [2024-07-22 15:03:41.593889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:22.199 [2024-07-22 15:03:41.593895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:50:22.199 [2024-07-22 15:03:41.593900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:22.199 
[2024-07-22 15:03:41.593905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bacf0 is same with the state(5) to be set 00:50:22.199 [2024-07-22 15:03:41.603695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bacf0 (9): Bad file descriptor 00:50:22.199 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:50:22.199 15:03:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:22.199 [2024-07-22 15:03:41.613692] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:23.138 [2024-07-22 15:03:42.638750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:50:23.138 [2024-07-22 15:03:42.639015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bacf0 with addr=10.0.0.2, port=4420 00:50:23.138 [2024-07-22 15:03:42.639065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bacf0 is same with the state(5) to be set 00:50:23.138 [2024-07-22 15:03:42.639143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bacf0 (9): Bad file descriptor 00:50:23.138 [2024-07-22 15:03:42.640297] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:50:23.138 [2024-07-22 15:03:42.640365] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:50:23.138 [2024-07-22 15:03:42.640388] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:50:23.138 [2024-07-22 15:03:42.640411] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:50:23.138 [2024-07-22 15:03:42.640481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:50:23.138 [2024-07-22 15:03:42.640506] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:50:23.138 15:03:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:24.077 [2024-07-22 15:03:43.638646] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:50:24.077 [2024-07-22 15:03:43.638688] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:50:24.077 [2024-07-22 15:03:43.638711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:50:24.077 [2024-07-22 15:03:43.638718] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:50:24.077 [2024-07-22 15:03:43.638735] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:50:24.077 [2024-07-22 15:03:43.638760] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:50:24.077 [2024-07-22 15:03:43.638797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:50:24.077 [2024-07-22 15:03:43.638806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:24.077 [2024-07-22 15:03:43.638814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:50:24.077 [2024-07-22 15:03:43.638820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:24.077 [2024-07-22 15:03:43.638826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:50:24.077 [2024-07-22 15:03:43.638831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:24.077 [2024-07-22 15:03:43.638854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:50:24.077 [2024-07-22 15:03:43.638859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:24.077 [2024-07-22 15:03:43.638865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:50:24.077 [2024-07-22 15:03:43.638871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:50:24.077 [2024-07-22 15:03:43.638877] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:50:24.077 [2024-07-22 15:03:43.639406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1586ff0 (9): Bad file descriptor 00:50:24.077 [2024-07-22 15:03:43.640411] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:50:24.077 [2024-07-22 15:03:43.640424] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:24.077 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:50:24.337 15:03:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:25.276 15:03:44 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:50:25.276 15:03:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:50:26.214 [2024-07-22 15:03:45.648162] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:50:26.214 [2024-07-22 15:03:45.648267] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:50:26.214 [2024-07-22 15:03:45.648284] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:50:26.214 [2024-07-22 15:03:45.734086] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:50:26.214 [2024-07-22 15:03:45.788587] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:50:26.214 [2024-07-22 15:03:45.788650] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:50:26.214 [2024-07-22 15:03:45.788667] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:50:26.214 [2024-07-22 15:03:45.788680] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:50:26.214 [2024-07-22 15:03:45.788701] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:50:26.214 [2024-07-22 15:03:45.796638] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15b8750 was disconnected and freed. delete nvme_qpair. 
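The repetitive get_bdev_list / sleep 1 blocks above are the test's polling loop. Reconstructed from the xtrace output, the helpers in host/discovery_remove_ifc.sh behave roughly like the sketch below (rpc_cmd is the suite's RPC helper that forwards the method to the app listening on the -s socket; exact script internals may differ), and the overall flow is: attach via discovery, remove the target interface, wait for the bdev to disappear when the controller-loss timeout fires, restore the interface, and wait for a fresh controller to attach:

  # reconstructed from the trace; names and flow as seen above
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll once per second until the bdev list matches the expected string
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

  # start discovery against the target's discovery service with short loss/reconnect timers
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  wait_for_bdev nvme0n1

  # remove the interface underneath the live connection...
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''      # bdev is deleted once the ctrlr-loss timeout expires

  # ...then bring it back and expect discovery to re-attach a new controller
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1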
00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 108462 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 108462 ']' 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 108462 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108462 00:50:26.474 killing process with pid 108462 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108462' 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 108462 00:50:26.474 15:03:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 108462 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:50:26.733 rmmod nvme_tcp 00:50:26.733 rmmod nvme_fabrics 00:50:26.733 rmmod nvme_keyring 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:50:26.733 
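The teardown that runs here and continues on the following lines mirrors the setup in reverse. Condensed from the trace (nvmftestfini / nvmfcleanup; the real helpers retry module removal and silence errors), it is roughly:

  kill 108462 && wait 108462          # killprocess $hostpid (host-side app)
  sync
  modprobe -v -r nvme-tcp             # also drops nvme_fabrics / nvme_keyring dependencies
  modprobe -v -r nvme-fabrics
  kill 108405 && wait 108405          # killprocess $nvmfpid (nvmf_tgt in the namespace)
  ip netns delete nvmf_tgt_ns_spdk    # assumption: _remove_spdk_ns is not expanded in the trace
  ip -4 addr flush nvmf_init_if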
15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 108405 ']' 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 108405 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 108405 ']' 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 108405 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108405 00:50:26.733 killing process with pid 108405 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108405' 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 108405 00:50:26.733 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 108405 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:50:26.993 00:50:26.993 real 0m14.026s 00:50:26.993 user 0m25.087s 00:50:26.993 sys 0m1.540s 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:50:26.993 15:03:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:50:26.993 ************************************ 00:50:26.993 END TEST nvmf_discovery_remove_ifc 00:50:26.993 ************************************ 00:50:26.993 15:03:46 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:50:26.993 15:03:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:50:26.993 15:03:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:50:26.993 15:03:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:26.993 ************************************ 00:50:26.993 START TEST nvmf_identify_kernel_target 00:50:26.993 ************************************ 00:50:26.993 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:50:27.254 * Looking for test storage... 00:50:27.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:50:27.254 Cannot find device "nvmf_tgt_br" 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:50:27.254 Cannot find device "nvmf_tgt_br2" 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:50:27.254 Cannot find device "nvmf_tgt_br" 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:50:27.254 Cannot find device "nvmf_tgt_br2" 00:50:27.254 15:03:46 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:50:27.254 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:27.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:27.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:27.514 15:03:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:50:27.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:27.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:50:27.514 00:50:27.514 --- 10.0.0.2 ping statistics --- 00:50:27.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:27.514 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:50:27.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:27.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:50:27.514 00:50:27.514 --- 10.0.0.3 ping statistics --- 00:50:27.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:27.514 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:27.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:50:27.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:50:27.514 00:50:27.514 --- 10.0.0.1 ping statistics --- 00:50:27.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:27.514 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:27.514 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:50:27.515 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:50:28.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:28.082 Waiting for block devices as requested 00:50:28.082 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:50:28.342 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:50:28.342 No valid GPT data, bailing 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:50:28.342 No valid GPT data, bailing 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:50:28.342 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:50:28.343 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:50:28.343 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:50:28.343 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:50:28.343 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:28.343 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:50:28.343 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:50:28.343 15:03:47 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:50:28.602 No valid GPT data, bailing 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:50:28.602 No valid GPT data, bailing 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
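The trace above has settled on /dev/nvme1n1 as the backing block device and is now building a kernel NVMe-oF target through configfs; the commands that follow finish that job. A condensed sketch of the sequence in the order the trace runs it (the configfs attribute file names are not visible in the trace itself and are assumed here from the standard Linux nvmet layout):

    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    modprobe nvmet                                              # done earlier in the trace (common.sh@642)
    mkdir "$nvmet/subsystems/$nqn"                              # subsystem
    mkdir "$nvmet/subsystems/$nqn/namespaces/1"                 # namespace 1
    mkdir "$nvmet/ports/1"                                      # listener port
    echo "SPDK-$nqn"  > "$nvmet/subsystems/$nqn/attr_model"           # assumed attribute name
    echo 1            > "$nvmet/subsystems/$nqn/attr_allow_any_host"  # assumed attribute name
    echo /dev/nvme1n1 > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$nvmet/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/$nqn"   # expose the subsystem on the port

Once the symlink is in place, the nvme discover call against 10.0.0.1:4420 returns the two discovery log entries shown below: the well-known discovery subsystem and nqn.2016-06.io.spdk:testnqn itself.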
00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:50:28.602 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -a 10.0.0.1 -t tcp -s 4420 00:50:28.602 00:50:28.602 Discovery Log Number of Records 2, Generation counter 2 00:50:28.602 =====Discovery Log Entry 0====== 00:50:28.602 trtype: tcp 00:50:28.602 adrfam: ipv4 00:50:28.602 subtype: current discovery subsystem 00:50:28.602 treq: not specified, sq flow control disable supported 00:50:28.602 portid: 1 00:50:28.602 trsvcid: 4420 00:50:28.602 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:50:28.602 traddr: 10.0.0.1 00:50:28.602 eflags: none 00:50:28.602 sectype: none 00:50:28.602 =====Discovery Log Entry 1====== 00:50:28.602 trtype: tcp 00:50:28.602 adrfam: ipv4 00:50:28.602 subtype: nvme subsystem 00:50:28.602 treq: not specified, sq flow control disable supported 00:50:28.602 portid: 1 00:50:28.602 trsvcid: 4420 00:50:28.602 subnqn: nqn.2016-06.io.spdk:testnqn 00:50:28.602 traddr: 10.0.0.1 00:50:28.602 eflags: none 00:50:28.602 sectype: none 00:50:28.603 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:50:28.603 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:50:28.865 ===================================================== 00:50:28.865 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:50:28.865 ===================================================== 00:50:28.865 Controller Capabilities/Features 00:50:28.865 ================================ 00:50:28.865 Vendor ID: 0000 00:50:28.865 Subsystem Vendor ID: 0000 00:50:28.865 Serial Number: f177b9ec7fd4736900c3 00:50:28.865 Model Number: Linux 00:50:28.865 Firmware Version: 6.7.0-68 00:50:28.865 Recommended Arb Burst: 0 00:50:28.865 IEEE OUI Identifier: 00 00 00 00:50:28.865 Multi-path I/O 00:50:28.865 May have multiple subsystem ports: No 00:50:28.865 May have multiple controllers: No 00:50:28.865 Associated with SR-IOV VF: No 00:50:28.865 Max Data Transfer Size: Unlimited 00:50:28.865 Max Number of Namespaces: 0 
00:50:28.865 Max Number of I/O Queues: 1024 00:50:28.865 NVMe Specification Version (VS): 1.3 00:50:28.865 NVMe Specification Version (Identify): 1.3 00:50:28.865 Maximum Queue Entries: 1024 00:50:28.865 Contiguous Queues Required: No 00:50:28.865 Arbitration Mechanisms Supported 00:50:28.865 Weighted Round Robin: Not Supported 00:50:28.865 Vendor Specific: Not Supported 00:50:28.865 Reset Timeout: 7500 ms 00:50:28.865 Doorbell Stride: 4 bytes 00:50:28.865 NVM Subsystem Reset: Not Supported 00:50:28.865 Command Sets Supported 00:50:28.865 NVM Command Set: Supported 00:50:28.865 Boot Partition: Not Supported 00:50:28.865 Memory Page Size Minimum: 4096 bytes 00:50:28.865 Memory Page Size Maximum: 4096 bytes 00:50:28.865 Persistent Memory Region: Not Supported 00:50:28.865 Optional Asynchronous Events Supported 00:50:28.865 Namespace Attribute Notices: Not Supported 00:50:28.865 Firmware Activation Notices: Not Supported 00:50:28.865 ANA Change Notices: Not Supported 00:50:28.865 PLE Aggregate Log Change Notices: Not Supported 00:50:28.865 LBA Status Info Alert Notices: Not Supported 00:50:28.865 EGE Aggregate Log Change Notices: Not Supported 00:50:28.865 Normal NVM Subsystem Shutdown event: Not Supported 00:50:28.865 Zone Descriptor Change Notices: Not Supported 00:50:28.865 Discovery Log Change Notices: Supported 00:50:28.865 Controller Attributes 00:50:28.865 128-bit Host Identifier: Not Supported 00:50:28.865 Non-Operational Permissive Mode: Not Supported 00:50:28.865 NVM Sets: Not Supported 00:50:28.865 Read Recovery Levels: Not Supported 00:50:28.865 Endurance Groups: Not Supported 00:50:28.865 Predictable Latency Mode: Not Supported 00:50:28.865 Traffic Based Keep ALive: Not Supported 00:50:28.865 Namespace Granularity: Not Supported 00:50:28.865 SQ Associations: Not Supported 00:50:28.865 UUID List: Not Supported 00:50:28.865 Multi-Domain Subsystem: Not Supported 00:50:28.865 Fixed Capacity Management: Not Supported 00:50:28.865 Variable Capacity Management: Not Supported 00:50:28.865 Delete Endurance Group: Not Supported 00:50:28.865 Delete NVM Set: Not Supported 00:50:28.865 Extended LBA Formats Supported: Not Supported 00:50:28.865 Flexible Data Placement Supported: Not Supported 00:50:28.865 00:50:28.865 Controller Memory Buffer Support 00:50:28.865 ================================ 00:50:28.865 Supported: No 00:50:28.865 00:50:28.865 Persistent Memory Region Support 00:50:28.865 ================================ 00:50:28.865 Supported: No 00:50:28.865 00:50:28.865 Admin Command Set Attributes 00:50:28.865 ============================ 00:50:28.865 Security Send/Receive: Not Supported 00:50:28.865 Format NVM: Not Supported 00:50:28.865 Firmware Activate/Download: Not Supported 00:50:28.865 Namespace Management: Not Supported 00:50:28.865 Device Self-Test: Not Supported 00:50:28.865 Directives: Not Supported 00:50:28.865 NVMe-MI: Not Supported 00:50:28.865 Virtualization Management: Not Supported 00:50:28.865 Doorbell Buffer Config: Not Supported 00:50:28.865 Get LBA Status Capability: Not Supported 00:50:28.865 Command & Feature Lockdown Capability: Not Supported 00:50:28.865 Abort Command Limit: 1 00:50:28.865 Async Event Request Limit: 1 00:50:28.865 Number of Firmware Slots: N/A 00:50:28.865 Firmware Slot 1 Read-Only: N/A 00:50:28.865 Firmware Activation Without Reset: N/A 00:50:28.865 Multiple Update Detection Support: N/A 00:50:28.865 Firmware Update Granularity: No Information Provided 00:50:28.865 Per-Namespace SMART Log: No 00:50:28.865 Asymmetric Namespace Access Log Page: 
Not Supported 00:50:28.865 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:50:28.865 Command Effects Log Page: Not Supported 00:50:28.865 Get Log Page Extended Data: Supported 00:50:28.865 Telemetry Log Pages: Not Supported 00:50:28.865 Persistent Event Log Pages: Not Supported 00:50:28.865 Supported Log Pages Log Page: May Support 00:50:28.865 Commands Supported & Effects Log Page: Not Supported 00:50:28.865 Feature Identifiers & Effects Log Page:May Support 00:50:28.865 NVMe-MI Commands & Effects Log Page: May Support 00:50:28.865 Data Area 4 for Telemetry Log: Not Supported 00:50:28.865 Error Log Page Entries Supported: 1 00:50:28.865 Keep Alive: Not Supported 00:50:28.865 00:50:28.865 NVM Command Set Attributes 00:50:28.865 ========================== 00:50:28.865 Submission Queue Entry Size 00:50:28.865 Max: 1 00:50:28.865 Min: 1 00:50:28.865 Completion Queue Entry Size 00:50:28.865 Max: 1 00:50:28.865 Min: 1 00:50:28.865 Number of Namespaces: 0 00:50:28.865 Compare Command: Not Supported 00:50:28.865 Write Uncorrectable Command: Not Supported 00:50:28.865 Dataset Management Command: Not Supported 00:50:28.865 Write Zeroes Command: Not Supported 00:50:28.865 Set Features Save Field: Not Supported 00:50:28.865 Reservations: Not Supported 00:50:28.865 Timestamp: Not Supported 00:50:28.865 Copy: Not Supported 00:50:28.865 Volatile Write Cache: Not Present 00:50:28.865 Atomic Write Unit (Normal): 1 00:50:28.865 Atomic Write Unit (PFail): 1 00:50:28.865 Atomic Compare & Write Unit: 1 00:50:28.865 Fused Compare & Write: Not Supported 00:50:28.865 Scatter-Gather List 00:50:28.865 SGL Command Set: Supported 00:50:28.865 SGL Keyed: Not Supported 00:50:28.865 SGL Bit Bucket Descriptor: Not Supported 00:50:28.865 SGL Metadata Pointer: Not Supported 00:50:28.865 Oversized SGL: Not Supported 00:50:28.865 SGL Metadata Address: Not Supported 00:50:28.865 SGL Offset: Supported 00:50:28.865 Transport SGL Data Block: Not Supported 00:50:28.865 Replay Protected Memory Block: Not Supported 00:50:28.865 00:50:28.865 Firmware Slot Information 00:50:28.865 ========================= 00:50:28.865 Active slot: 0 00:50:28.865 00:50:28.865 00:50:28.865 Error Log 00:50:28.865 ========= 00:50:28.865 00:50:28.865 Active Namespaces 00:50:28.865 ================= 00:50:28.865 Discovery Log Page 00:50:28.865 ================== 00:50:28.865 Generation Counter: 2 00:50:28.865 Number of Records: 2 00:50:28.865 Record Format: 0 00:50:28.865 00:50:28.865 Discovery Log Entry 0 00:50:28.865 ---------------------- 00:50:28.865 Transport Type: 3 (TCP) 00:50:28.865 Address Family: 1 (IPv4) 00:50:28.865 Subsystem Type: 3 (Current Discovery Subsystem) 00:50:28.865 Entry Flags: 00:50:28.865 Duplicate Returned Information: 0 00:50:28.865 Explicit Persistent Connection Support for Discovery: 0 00:50:28.865 Transport Requirements: 00:50:28.865 Secure Channel: Not Specified 00:50:28.865 Port ID: 1 (0x0001) 00:50:28.865 Controller ID: 65535 (0xffff) 00:50:28.865 Admin Max SQ Size: 32 00:50:28.865 Transport Service Identifier: 4420 00:50:28.865 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:50:28.865 Transport Address: 10.0.0.1 00:50:28.866 Discovery Log Entry 1 00:50:28.866 ---------------------- 00:50:28.866 Transport Type: 3 (TCP) 00:50:28.866 Address Family: 1 (IPv4) 00:50:28.866 Subsystem Type: 2 (NVM Subsystem) 00:50:28.866 Entry Flags: 00:50:28.866 Duplicate Returned Information: 0 00:50:28.866 Explicit Persistent Connection Support for Discovery: 0 00:50:28.866 Transport Requirements: 00:50:28.866 
Secure Channel: Not Specified 00:50:28.866 Port ID: 1 (0x0001) 00:50:28.866 Controller ID: 65535 (0xffff) 00:50:28.866 Admin Max SQ Size: 32 00:50:28.866 Transport Service Identifier: 4420 00:50:28.866 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:50:28.866 Transport Address: 10.0.0.1 00:50:28.866 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:50:28.866 get_feature(0x01) failed 00:50:28.866 get_feature(0x02) failed 00:50:28.866 get_feature(0x04) failed 00:50:28.866 ===================================================== 00:50:28.866 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:50:28.866 ===================================================== 00:50:28.866 Controller Capabilities/Features 00:50:28.866 ================================ 00:50:28.866 Vendor ID: 0000 00:50:28.866 Subsystem Vendor ID: 0000 00:50:28.866 Serial Number: cebb15bbf320b45c85ba 00:50:28.866 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:50:28.866 Firmware Version: 6.7.0-68 00:50:28.866 Recommended Arb Burst: 6 00:50:28.866 IEEE OUI Identifier: 00 00 00 00:50:28.866 Multi-path I/O 00:50:28.866 May have multiple subsystem ports: Yes 00:50:28.866 May have multiple controllers: Yes 00:50:28.866 Associated with SR-IOV VF: No 00:50:28.866 Max Data Transfer Size: Unlimited 00:50:28.866 Max Number of Namespaces: 1024 00:50:28.866 Max Number of I/O Queues: 128 00:50:28.866 NVMe Specification Version (VS): 1.3 00:50:28.866 NVMe Specification Version (Identify): 1.3 00:50:28.866 Maximum Queue Entries: 1024 00:50:28.866 Contiguous Queues Required: No 00:50:28.866 Arbitration Mechanisms Supported 00:50:28.866 Weighted Round Robin: Not Supported 00:50:28.866 Vendor Specific: Not Supported 00:50:28.866 Reset Timeout: 7500 ms 00:50:28.866 Doorbell Stride: 4 bytes 00:50:28.866 NVM Subsystem Reset: Not Supported 00:50:28.866 Command Sets Supported 00:50:28.866 NVM Command Set: Supported 00:50:28.866 Boot Partition: Not Supported 00:50:28.866 Memory Page Size Minimum: 4096 bytes 00:50:28.866 Memory Page Size Maximum: 4096 bytes 00:50:28.866 Persistent Memory Region: Not Supported 00:50:28.866 Optional Asynchronous Events Supported 00:50:28.866 Namespace Attribute Notices: Supported 00:50:28.866 Firmware Activation Notices: Not Supported 00:50:28.866 ANA Change Notices: Supported 00:50:28.866 PLE Aggregate Log Change Notices: Not Supported 00:50:28.866 LBA Status Info Alert Notices: Not Supported 00:50:28.866 EGE Aggregate Log Change Notices: Not Supported 00:50:28.866 Normal NVM Subsystem Shutdown event: Not Supported 00:50:28.866 Zone Descriptor Change Notices: Not Supported 00:50:28.866 Discovery Log Change Notices: Not Supported 00:50:28.866 Controller Attributes 00:50:28.866 128-bit Host Identifier: Supported 00:50:28.866 Non-Operational Permissive Mode: Not Supported 00:50:28.866 NVM Sets: Not Supported 00:50:28.866 Read Recovery Levels: Not Supported 00:50:28.866 Endurance Groups: Not Supported 00:50:28.866 Predictable Latency Mode: Not Supported 00:50:28.866 Traffic Based Keep ALive: Supported 00:50:28.866 Namespace Granularity: Not Supported 00:50:28.866 SQ Associations: Not Supported 00:50:28.866 UUID List: Not Supported 00:50:28.866 Multi-Domain Subsystem: Not Supported 00:50:28.866 Fixed Capacity Management: Not Supported 00:50:28.866 Variable Capacity Management: Not Supported 00:50:28.866 
Delete Endurance Group: Not Supported 00:50:28.866 Delete NVM Set: Not Supported 00:50:28.866 Extended LBA Formats Supported: Not Supported 00:50:28.866 Flexible Data Placement Supported: Not Supported 00:50:28.866 00:50:28.866 Controller Memory Buffer Support 00:50:28.866 ================================ 00:50:28.866 Supported: No 00:50:28.866 00:50:28.866 Persistent Memory Region Support 00:50:28.866 ================================ 00:50:28.866 Supported: No 00:50:28.866 00:50:28.866 Admin Command Set Attributes 00:50:28.866 ============================ 00:50:28.866 Security Send/Receive: Not Supported 00:50:28.866 Format NVM: Not Supported 00:50:28.866 Firmware Activate/Download: Not Supported 00:50:28.866 Namespace Management: Not Supported 00:50:28.866 Device Self-Test: Not Supported 00:50:28.866 Directives: Not Supported 00:50:28.866 NVMe-MI: Not Supported 00:50:28.866 Virtualization Management: Not Supported 00:50:28.866 Doorbell Buffer Config: Not Supported 00:50:28.866 Get LBA Status Capability: Not Supported 00:50:28.866 Command & Feature Lockdown Capability: Not Supported 00:50:28.866 Abort Command Limit: 4 00:50:28.866 Async Event Request Limit: 4 00:50:28.866 Number of Firmware Slots: N/A 00:50:28.866 Firmware Slot 1 Read-Only: N/A 00:50:28.866 Firmware Activation Without Reset: N/A 00:50:28.866 Multiple Update Detection Support: N/A 00:50:28.866 Firmware Update Granularity: No Information Provided 00:50:28.866 Per-Namespace SMART Log: Yes 00:50:28.866 Asymmetric Namespace Access Log Page: Supported 00:50:28.866 ANA Transition Time : 10 sec 00:50:28.866 00:50:28.866 Asymmetric Namespace Access Capabilities 00:50:28.866 ANA Optimized State : Supported 00:50:28.866 ANA Non-Optimized State : Supported 00:50:28.866 ANA Inaccessible State : Supported 00:50:28.866 ANA Persistent Loss State : Supported 00:50:28.866 ANA Change State : Supported 00:50:28.866 ANAGRPID is not changed : No 00:50:28.866 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:50:28.866 00:50:28.866 ANA Group Identifier Maximum : 128 00:50:28.866 Number of ANA Group Identifiers : 128 00:50:28.866 Max Number of Allowed Namespaces : 1024 00:50:28.866 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:50:28.866 Command Effects Log Page: Supported 00:50:28.866 Get Log Page Extended Data: Supported 00:50:28.866 Telemetry Log Pages: Not Supported 00:50:28.866 Persistent Event Log Pages: Not Supported 00:50:28.866 Supported Log Pages Log Page: May Support 00:50:28.866 Commands Supported & Effects Log Page: Not Supported 00:50:28.866 Feature Identifiers & Effects Log Page:May Support 00:50:28.866 NVMe-MI Commands & Effects Log Page: May Support 00:50:28.866 Data Area 4 for Telemetry Log: Not Supported 00:50:28.866 Error Log Page Entries Supported: 128 00:50:28.866 Keep Alive: Supported 00:50:28.866 Keep Alive Granularity: 1000 ms 00:50:28.866 00:50:28.866 NVM Command Set Attributes 00:50:28.866 ========================== 00:50:28.866 Submission Queue Entry Size 00:50:28.866 Max: 64 00:50:28.866 Min: 64 00:50:28.866 Completion Queue Entry Size 00:50:28.866 Max: 16 00:50:28.866 Min: 16 00:50:28.866 Number of Namespaces: 1024 00:50:28.866 Compare Command: Not Supported 00:50:28.866 Write Uncorrectable Command: Not Supported 00:50:28.866 Dataset Management Command: Supported 00:50:28.866 Write Zeroes Command: Supported 00:50:28.866 Set Features Save Field: Not Supported 00:50:28.866 Reservations: Not Supported 00:50:28.866 Timestamp: Not Supported 00:50:28.866 Copy: Not Supported 00:50:28.866 Volatile Write Cache: Present 
00:50:28.866 Atomic Write Unit (Normal): 1 00:50:28.866 Atomic Write Unit (PFail): 1 00:50:28.866 Atomic Compare & Write Unit: 1 00:50:28.866 Fused Compare & Write: Not Supported 00:50:28.866 Scatter-Gather List 00:50:28.866 SGL Command Set: Supported 00:50:28.866 SGL Keyed: Not Supported 00:50:28.866 SGL Bit Bucket Descriptor: Not Supported 00:50:28.866 SGL Metadata Pointer: Not Supported 00:50:28.866 Oversized SGL: Not Supported 00:50:28.866 SGL Metadata Address: Not Supported 00:50:28.866 SGL Offset: Supported 00:50:28.866 Transport SGL Data Block: Not Supported 00:50:28.866 Replay Protected Memory Block: Not Supported 00:50:28.866 00:50:28.866 Firmware Slot Information 00:50:28.866 ========================= 00:50:28.866 Active slot: 0 00:50:28.866 00:50:28.866 Asymmetric Namespace Access 00:50:28.866 =========================== 00:50:28.866 Change Count : 0 00:50:28.866 Number of ANA Group Descriptors : 1 00:50:28.866 ANA Group Descriptor : 0 00:50:28.866 ANA Group ID : 1 00:50:28.866 Number of NSID Values : 1 00:50:28.866 Change Count : 0 00:50:28.866 ANA State : 1 00:50:28.866 Namespace Identifier : 1 00:50:28.866 00:50:28.866 Commands Supported and Effects 00:50:28.866 ============================== 00:50:28.866 Admin Commands 00:50:28.866 -------------- 00:50:28.867 Get Log Page (02h): Supported 00:50:28.867 Identify (06h): Supported 00:50:28.867 Abort (08h): Supported 00:50:28.867 Set Features (09h): Supported 00:50:28.867 Get Features (0Ah): Supported 00:50:28.867 Asynchronous Event Request (0Ch): Supported 00:50:28.867 Keep Alive (18h): Supported 00:50:28.867 I/O Commands 00:50:28.867 ------------ 00:50:28.867 Flush (00h): Supported 00:50:28.867 Write (01h): Supported LBA-Change 00:50:28.867 Read (02h): Supported 00:50:28.867 Write Zeroes (08h): Supported LBA-Change 00:50:28.867 Dataset Management (09h): Supported 00:50:28.867 00:50:28.867 Error Log 00:50:28.867 ========= 00:50:28.867 Entry: 0 00:50:28.867 Error Count: 0x3 00:50:28.867 Submission Queue Id: 0x0 00:50:28.867 Command Id: 0x5 00:50:28.867 Phase Bit: 0 00:50:28.867 Status Code: 0x2 00:50:28.867 Status Code Type: 0x0 00:50:28.867 Do Not Retry: 1 00:50:29.129 Error Location: 0x28 00:50:29.129 LBA: 0x0 00:50:29.129 Namespace: 0x0 00:50:29.129 Vendor Log Page: 0x0 00:50:29.129 ----------- 00:50:29.129 Entry: 1 00:50:29.129 Error Count: 0x2 00:50:29.129 Submission Queue Id: 0x0 00:50:29.129 Command Id: 0x5 00:50:29.129 Phase Bit: 0 00:50:29.129 Status Code: 0x2 00:50:29.129 Status Code Type: 0x0 00:50:29.129 Do Not Retry: 1 00:50:29.129 Error Location: 0x28 00:50:29.129 LBA: 0x0 00:50:29.129 Namespace: 0x0 00:50:29.129 Vendor Log Page: 0x0 00:50:29.129 ----------- 00:50:29.129 Entry: 2 00:50:29.129 Error Count: 0x1 00:50:29.129 Submission Queue Id: 0x0 00:50:29.129 Command Id: 0x4 00:50:29.129 Phase Bit: 0 00:50:29.129 Status Code: 0x2 00:50:29.129 Status Code Type: 0x0 00:50:29.129 Do Not Retry: 1 00:50:29.129 Error Location: 0x28 00:50:29.129 LBA: 0x0 00:50:29.129 Namespace: 0x0 00:50:29.129 Vendor Log Page: 0x0 00:50:29.129 00:50:29.129 Number of Queues 00:50:29.129 ================ 00:50:29.129 Number of I/O Submission Queues: 128 00:50:29.129 Number of I/O Completion Queues: 128 00:50:29.129 00:50:29.129 ZNS Specific Controller Data 00:50:29.129 ============================ 00:50:29.129 Zone Append Size Limit: 0 00:50:29.129 00:50:29.129 00:50:29.129 Active Namespaces 00:50:29.129 ================= 00:50:29.129 get_feature(0x05) failed 00:50:29.129 Namespace ID:1 00:50:29.129 Command Set Identifier: NVM (00h) 
00:50:29.129 Deallocate: Supported 00:50:29.129 Deallocated/Unwritten Error: Not Supported 00:50:29.130 Deallocated Read Value: Unknown 00:50:29.130 Deallocate in Write Zeroes: Not Supported 00:50:29.130 Deallocated Guard Field: 0xFFFF 00:50:29.130 Flush: Supported 00:50:29.130 Reservation: Not Supported 00:50:29.130 Namespace Sharing Capabilities: Multiple Controllers 00:50:29.130 Size (in LBAs): 1310720 (5GiB) 00:50:29.130 Capacity (in LBAs): 1310720 (5GiB) 00:50:29.130 Utilization (in LBAs): 1310720 (5GiB) 00:50:29.130 UUID: 38603271-bb82-4ecc-8edc-9dbe5521b176 00:50:29.130 Thin Provisioning: Not Supported 00:50:29.130 Per-NS Atomic Units: Yes 00:50:29.130 Atomic Boundary Size (Normal): 0 00:50:29.130 Atomic Boundary Size (PFail): 0 00:50:29.130 Atomic Boundary Offset: 0 00:50:29.130 NGUID/EUI64 Never Reused: No 00:50:29.130 ANA group ID: 1 00:50:29.130 Namespace Write Protected: No 00:50:29.130 Number of LBA Formats: 1 00:50:29.130 Current LBA Format: LBA Format #00 00:50:29.130 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:50:29.130 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:50:29.130 rmmod nvme_tcp 00:50:29.130 rmmod nvme_fabrics 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:50:29.130 
15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:50:29.130 15:03:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:50:30.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:30.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:50:30.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:50:30.325 00:50:30.325 real 0m3.146s 00:50:30.325 user 0m1.024s 00:50:30.325 sys 0m1.665s 00:50:30.325 15:03:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:50:30.325 15:03:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:50:30.325 ************************************ 00:50:30.325 END TEST nvmf_identify_kernel_target 00:50:30.325 ************************************ 00:50:30.325 15:03:49 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:50:30.325 15:03:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:50:30.325 15:03:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:50:30.325 15:03:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:50:30.325 ************************************ 00:50:30.325 START TEST nvmf_auth_host 00:50:30.325 ************************************ 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:50:30.325 * Looking for test storage... 
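The clean_kernel_target trap above undoes that configfs setup in reverse order before the next test starts. A condensed sketch of the teardown as the trace runs it (attribute paths are again assumed from the standard nvmet configfs layout; the echo 0 is taken to be disabling the namespace):

    nqn=nqn.2016-06.io.spdk:testnqn
    nvmet=/sys/kernel/config/nvmet
    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"    # assumed target of the echo 0 above
    rm -f  "$nvmet/ports/1/subsystems/$nqn"                  # unlink the subsystem from the port
    rmdir  "$nvmet/subsystems/$nqn/namespaces/1"
    rmdir  "$nvmet/ports/1"
    rmdir  "$nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh            # rebind the NVMe devices to uio_pci_generic

Order matters here: the port symlink and the namespace directory have to be removed before their parent directories can be rmdir'd, and the modules can only be unloaded once configfs is empty.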
00:50:30.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:50:30.325 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:50:30.584 15:03:49 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:50:30.585 15:03:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:50:30.585 Cannot find device "nvmf_tgt_br" 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:50:30.585 Cannot find device "nvmf_tgt_br2" 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:50:30.585 Cannot find device "nvmf_tgt_br" 
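nvmf_veth_init now rebuilds the same test topology for the auth test. Stale interfaces are deleted first (the Cannot find device messages above and the Cannot open network namespace messages just below are expected on a clean host; each failing delete is swallowed with true), then the namespace, veth pairs and bridge are created from scratch. A condensed sketch of the topology the commands below set up:

    ip netns add nvmf_tgt_ns_spdk                                    # target side lives in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side <-> bridge
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target interface <-> bridge
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target interface <-> bridge
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                          # likewise nvmf_tgt_br and nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # connectivity check before the target starts

(The ip link set ... up calls that bring each interface online are left out of the sketch; the full sequence is in the trace that follows.)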
00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:50:30.585 Cannot find device "nvmf_tgt_br2" 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:50:30.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:50:30.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:50:30.585 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:50:30.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:50:30.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:50:30.844 00:50:30.844 --- 10.0.0.2 ping statistics --- 00:50:30.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:30.844 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:50:30.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:50:30.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:50:30.844 00:50:30.844 --- 10.0.0.3 ping statistics --- 00:50:30.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:30.844 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:50:30.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:50:30.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:50:30.844 00:50:30.844 --- 10.0.0.1 ping statistics --- 00:50:30.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:50:30.844 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=109352 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 109352 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 109352 ']' 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:50:30.844 15:03:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:50:30.844 15:03:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1846e8672cd86e2432b3d9265c9b616c 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1Yp 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1846e8672cd86e2432b3d9265c9b616c 0 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1846e8672cd86e2432b3d9265c9b616c 0 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1846e8672cd86e2432b3d9265c9b616c 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:31.781 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1Yp 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1Yp 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.1Yp 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3fbc36dc7298d138a7ed8cc3a1ad2414ffac431ae4f29b420fb0624c098ef870 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7Yh 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3fbc36dc7298d138a7ed8cc3a1ad2414ffac431ae4f29b420fb0624c098ef870 3 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3fbc36dc7298d138a7ed8cc3a1ad2414ffac431ae4f29b420fb0624c098ef870 3 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3fbc36dc7298d138a7ed8cc3a1ad2414ffac431ae4f29b420fb0624c098ef870 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7Yh 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7Yh 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7Yh 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2e2dd61d07b4e08b2510ec80010618694b81221546d7f4ed 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sru 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2e2dd61d07b4e08b2510ec80010618694b81221546d7f4ed 0 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2e2dd61d07b4e08b2510ec80010618694b81221546d7f4ed 0 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2e2dd61d07b4e08b2510ec80010618694b81221546d7f4ed 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sru 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sru 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.sru 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4cbd576d09a2565c4389e1109de84fd6dd95d11043398431 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.s1y 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4cbd576d09a2565c4389e1109de84fd6dd95d11043398431 2 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4cbd576d09a2565c4389e1109de84fd6dd95d11043398431 2 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4cbd576d09a2565c4389e1109de84fd6dd95d11043398431 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.s1y 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.s1y 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.s1y 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af3c855daa2ace5be57b0567bdd7dd71 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.YRY 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af3c855daa2ace5be57b0567bdd7dd71 
1 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af3c855daa2ace5be57b0567bdd7dd71 1 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af3c855daa2ace5be57b0567bdd7dd71 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:50:32.040 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.YRY 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.YRY 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.YRY 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d6c1cee4fbd365241e44ea7ac920e06 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.plU 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d6c1cee4fbd365241e44ea7ac920e06 1 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d6c1cee4fbd365241e44ea7ac920e06 1 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d6c1cee4fbd365241e44ea7ac920e06 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.plU 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.plU 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.plU 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:50:32.299 15:03:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d26137b165367e40fe52c420b2d0da13fa17774427f499a0 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.YVT 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d26137b165367e40fe52c420b2d0da13fa17774427f499a0 2 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d26137b165367e40fe52c420b2d0da13fa17774427f499a0 2 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d26137b165367e40fe52c420b2d0da13fa17774427f499a0 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.YVT 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.YVT 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.YVT 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4309332d6da644921c7ae612ad68ee46 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nR1 00:50:32.299 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4309332d6da644921c7ae612ad68ee46 0 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4309332d6da644921c7ae612ad68ee46 0 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4309332d6da644921c7ae612ad68ee46 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nR1 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nR1 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nR1 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3048462861cf738d389f8875c893e76b831aabc33fd2e137eadaf1557c897171 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.9ld 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3048462861cf738d389f8875c893e76b831aabc33fd2e137eadaf1557c897171 3 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3048462861cf738d389f8875c893e76b831aabc33fd2e137eadaf1557c897171 3 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3048462861cf738d389f8875c893e76b831aabc33fd2e137eadaf1557c897171 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:50:32.300 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.9ld 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.9ld 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.9ld 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 109352 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 109352 ']' 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:50:32.559 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:50:32.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:50:32.560 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
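The trace above (host/auth.sh@73-77) builds five host secrets, keys[0]..keys[4], plus controller secrets ckeys[0]..ckeys[3], by calling gen_dhchap_key from nvmf/common.sh: a random hex string is drawn from /dev/urandom with xxd, wrapped into the DHHC-1:<digest id>:<base64 secret>: form by an inline Python helper, and written to a mode-0600 file under /tmp. A minimal bash sketch of that shape follows; gen_dhchap_key_sketch and its CRC32 suffix handling are inferred from the keys echoed later in this log rather than copied from nvmf/common.sh, so treat the formatting details as assumptions.

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len hex characters of secret
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # Assumed layout: DHHC-1:<two-digit digest id>:base64(<ASCII secret> + CRC32):
    python3 -c 'import base64, binascii, struct, sys
secret = sys.argv[1].encode()
crc = struct.pack("<I", binascii.crc32(secret))
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))' \
        "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"                                  # key files must stay private
    echo "$file"
}

# Same digest/length pairs as host/auth.sh@73-77 above:
keys[0]=$(gen_dhchap_key_sketch null 32);   ckeys[0]=$(gen_dhchap_key_sketch sha512 64)
keys[1]=$(gen_dhchap_key_sketch null 48);   ckeys[1]=$(gen_dhchap_key_sketch sha384 48)
keys[2]=$(gen_dhchap_key_sketch sha256 32); ckeys[2]=$(gen_dhchap_key_sketch sha256 32)
keys[3]=$(gen_dhchap_key_sketch sha384 48); ckeys[3]=$(gen_dhchap_key_sketch null 32)
keys[4]=$(gen_dhchap_key_sketch sha512 64); ckeys[4]=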
00:50:32.560 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:50:32.560 15:03:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1Yp 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7Yh ]] 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Yh 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sru 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.560 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.s1y ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.s1y 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.YRY 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.plU ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.plU 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
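With the target up, host/auth.sh@80-82 registers every generated key file with the target's keyring, as traced here for key0 through ckey2 and continued below for the remaining indices. Condensed, the loop is roughly the following (rpc_cmd stands for scripts/rpc.py pointed at the target's /var/tmp/spdk.sock; only indices with a non-empty controller key get a ckey entry):

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"        # host secret for key index i
    if [[ -n ${ckeys[i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"  # matching controller secret
    fi
done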
00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.YVT 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nR1 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nR1 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.9ld 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:32.818 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
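nvmet_auth_init then builds the counterpart target in the kernel: configure_kernel_target (nvmf/common.sh@632 onwards) has just defined the configfs paths for subsystem nqn.2024-02.io.spdk:cnode0, its namespace 1 and port 1, and the steps traced below scan for a free local NVMe block device, create those directories and wire them together. Because xtrace does not show redirections, the attribute files in this condensed sketch are the standard nvmet configfs names and should be read as assumptions about where each echo below lands:

nvmet=/sys/kernel/config/nvmet
kernel_subsystem=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
kernel_namespace=$kernel_subsystem/namespaces/1
kernel_port=$nvmet/ports/1

mkdir "$kernel_subsystem" "$kernel_namespace" "$kernel_port"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$kernel_subsystem/attr_model"           # model string
echo 1                                 > "$kernel_subsystem/attr_allow_any_host"
echo /dev/nvme1n1                      > "$kernel_namespace/device_path"          # device picked by the scan below
echo 1                                 > "$kernel_namespace/enable"
echo 10.0.0.1                          > "$kernel_port/addr_traddr"
echo tcp                               > "$kernel_port/addr_trtype"
echo 4420                              > "$kernel_port/addr_trsvcid"
echo ipv4                              > "$kernel_port/addr_adrfam"
ln -s "$kernel_subsystem" "$kernel_port/subsystems/"                              # expose the subsystem on the port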
00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:50:32.819 15:03:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:50:33.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:50:33.385 Waiting for block devices as requested 00:50:33.385 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:50:33.385 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:50:34.341 No valid GPT data, bailing 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:50:34.341 No valid GPT data, bailing 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:50:34.341 No valid GPT data, bailing 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:50:34.341 No valid GPT data, bailing 00:50:34.341 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:50:34.600 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:50:34.600 15:03:53 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:50:34.600 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:50:34.600 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:50:34.601 15:03:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:50:34.601 15:03:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -a 10.0.0.1 -t tcp -s 4420 00:50:34.601 00:50:34.601 Discovery Log Number of Records 2, Generation counter 2 00:50:34.601 =====Discovery Log Entry 0====== 00:50:34.601 trtype: tcp 00:50:34.601 adrfam: ipv4 00:50:34.601 subtype: current discovery subsystem 00:50:34.601 treq: not specified, sq flow control disable supported 00:50:34.601 portid: 1 00:50:34.601 trsvcid: 4420 00:50:34.601 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:50:34.601 traddr: 10.0.0.1 00:50:34.601 eflags: none 00:50:34.601 sectype: none 00:50:34.601 =====Discovery Log Entry 1====== 00:50:34.601 trtype: tcp 00:50:34.601 adrfam: ipv4 00:50:34.601 subtype: nvme subsystem 00:50:34.601 treq: not specified, sq flow control disable supported 00:50:34.601 portid: 1 00:50:34.601 trsvcid: 4420 00:50:34.601 subnqn: nqn.2024-02.io.spdk:cnode0 00:50:34.601 traddr: 10.0.0.1 00:50:34.601 eflags: none 00:50:34.601 sectype: none 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.601 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:34.861 nvme0n1 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:34.861 nvme0n1 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:34.861 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.121 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:35.121 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:35.121 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.121 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.121 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.121 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.122 nvme0n1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.122 15:03:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.122 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.382 nvme0n1 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:50:35.382 15:03:54 
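Each pass of the loops above is one connect_authenticate round: nvmet_auth_set_key pushes the chosen hash ('hmac(sha256)'), DH group and DHHC-1 secrets to the kernel target's host entry, bdev_nvme_set_options pins the initiator to the same digest/dhgroup pair, and bdev_nvme_attach_controller must then complete DH-HMAC-CHAP before nvme0n1 appears; the controller is detached and the next key index is tried. Reduced to its host-side RPC skeleton (key0/ckey0 standing in for whichever keyring entries the current index selects; the ctrlr-key argument is dropped when there is no controller secret, as for key4), a round looks roughly like:

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach only succeeds if auth passed
rpc_cmd bdev_nvme_detach_controller nvme0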
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:35.382 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.383 nvme0n1 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.383 15:03:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:35.642 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.643 nvme0n1 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:35.643 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:35.903 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.163 nvme0n1 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:36.163 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.164 nvme0n1 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.164 15:03:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.164 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.423 nvme0n1 00:50:36.423 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.424 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:36.424 15:03:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:36.424 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.424 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.424 15:03:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.424 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.683 nvme0n1 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
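For orientation, the stretch of xtrace above and below is the sha256 pass of the test's DH-group / key-index sweep. A condensed, hypothetical sketch of that control flow, reconstructed only from the commands visible in this log, follows; provision_target_key and attempt_connect are illustrative stand-ins for the test's own nvmet_auth_set_key and connect_authenticate helpers (whose bodies are not reproduced here), and the DH groups and key indices listed are the ones iterated in this excerpt.

#!/usr/bin/env bash
# Hypothetical reconstruction of the loop driving this part of the log.
# The stubs below only echo what each step does; the real helpers live in
# the test's host/auth.sh and are not duplicated here.

digest=sha256
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this excerpt
keys=(key0 key1 key2 key3 key4)                      # key indices 0..4, as iterated above

provision_target_key() {   # stand-in for: nvmet_auth_set_key <digest> <dhgroup> <keyid>
  echo "target side: hmac(${1}), ${2}, DHHC-1 key/ctrlr-key for index ${3}"
}

attempt_connect() {        # stand-in for: connect_authenticate <digest> <dhgroup> <keyid>
  echo "host side: attach nvme0 with --dhchap-key key${3}, verify, detach"
}

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do      # "${!keys[@]}" expands to the indices 0..4
    provision_target_key "$digest" "$dhgroup" "$keyid"
    attempt_connect "$digest" "$dhgroup" "$keyid"
  done
done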
00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.683 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.942 nvme0n1 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:36.943 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
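Each connect_authenticate iteration above reduces to four RPCs on the SPDK host. A minimal sketch of a single iteration is shown next, written directly against scripts/rpc.py (the harness's rpc_cmd is effectively a wrapper around it); the RPC names and flags are the ones printed in the xtrace, while the rpc.py path is illustrative and key0/ckey0 refer to keys the test registered earlier, outside this excerpt.

#!/usr/bin/env bash
# One sha256 / ffdhe2048 / keyid=0 iteration of the host-side flow seen above.
set -e
RPC=./scripts/rpc.py   # illustrative path; adjust to the local SPDK checkout

# Restrict the initiator to one digest / DH-group combination for this pass.
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the authenticated subsystem using the named DH-HMAC-CHAP keys.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller came up, then detach before the next key index.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expected output: nvme0
$RPC bdev_nvme_detach_controller nvme0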
00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.512 15:03:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.512 nvme0n1 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.512 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.771 nvme0n1 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:37.771 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:37.772 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:37.772 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.030 nvme0n1 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:38.030 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.031 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.290 nvme0n1 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:38.290 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:38.291 15:03:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.291 15:03:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.550 nvme0n1 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:38.550 15:03:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:39.964 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.223 nvme0n1 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:40.223 15:03:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:40.224 15:03:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:40.224 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.224 15:03:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.483 nvme0n1 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:40.483 
15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.483 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:40.743 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.003 nvme0n1 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:41.003 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.004 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.264 nvme0n1 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.264 15:04:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:41.264 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:41.265 15:04:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:41.265 15:04:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:41.265 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.265 15:04:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.834 nvme0n1 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:41.834 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.093 nvme0n1 00:50:42.093 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:42.354 15:04:01 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.354 15:04:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.924 nvme0n1 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:42.924 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:43.498 nvme0n1 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:43.498 
15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
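Aside: the trace above repeats one connect/verify/teardown cycle per digest, DH group and key index: nvmet_auth_set_key programs the secret on the target side, bdev_nvme_set_options pins the host to a single --dhchap-digests/--dhchap-dhgroups pair, bdev_nvme_attach_controller performs the DH-HMAC-CHAP handshake with the matching --dhchap-key/--dhchap-ctrlr-key names, bdev_nvme_get_controllers confirms that nvme0 came up, and bdev_nvme_detach_controller tears it down before the next combination. Below is a minimal sketch of one such cycle (sha256 / ffdhe8192 / key index 3), assuming a running SPDK target application, that rpc_cmd forwards to scripts/rpc.py, and that the key3/ckey3 key names were registered earlier in the script (not shown in this excerpt); the NQNs, address and flags are taken verbatim from the trace.

  # One connect_authenticate iteration as a standalone sketch; the rpc.py path
  # and the prior registration of key3/ckey3 are assumptions, not from the trace.
  RPC=scripts/rpc.py
  HOSTNQN=nqn.2024-02.io.spdk:host0
  SUBNQN=nqn.2024-02.io.spdk:cnode0

  # Allow exactly one digest/DH-group pair so the handshake must negotiate it.
  "$RPC" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # DH-HMAC-CHAP attach: host secret (key3) plus controller secret (ckey3) for
  # bidirectional authentication.
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # The iteration passes when the authenticated controller is listed as nvme0 ...
  "$RPC" bdev_nvme_get_controllers | jq -r '.[].name'

  # ... and is detached again before the next digest/dhgroup/key combination.
  "$RPC" bdev_nvme_detach_controller nvme0
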
00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:43.498 15:04:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.070 nvme0n1 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:44.070 
15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.070 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 nvme0n1 00:50:44.640 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:44.640 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:44.640 15:04:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 nvme0n1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
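Aside: each nvmet_auth_set_key call above is visible only through its echo statements (the host/auth.sh@48 through @51 entries in the trace): the HMAC name, the FFDHE group, the DHHC-1 host secret and, when present, the controller secret. The redirection targets are not part of the xtrace output, so the sketch below is only a guess at the target-side effect, under the assumption that the script writes these values into the kernel nvmet per-host authentication attributes in configfs (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the function name and paths are illustrative, not taken from the script.

  # Hypothetical reconstruction of the target-side key setup; the configfs
  # destinations are assumed, only the echoed values appear in the trace.
  nvmet_auth_set_key_sketch() {
      local hash=$1 dhgroup=$2 key=$3 ckey=${4:-}
      local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "$hash"    > "$host_cfg/dhchap_hash"     # e.g. 'hmac(sha384)' (auth.sh@48)
      echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"  # e.g. ffdhe2048 (auth.sh@49)
      echo "$key"     > "$host_cfg/dhchap_key"      # DHHC-1 host secret (auth.sh@50)
      if [[ -n $ckey ]]; then                       # bidirectional auth only when a ckey exists
          echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # (auth.sh@51)
      fi
  }

  # The arguments would come straight from the trace above, e.g. for key index 1:
  # nvmet_auth_set_key_sketch 'hmac(sha384)' ffdhe2048 "$key" "$ckey"
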
00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.640 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.900 nvme0n1 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:44.900 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:44.901 nvme0n1 00:50:44.901 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.160 nvme0n1 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:45.160 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.161 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.421 nvme0n1 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
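The repeated nvmf/common.sh@741-755 entries above are get_main_ns_ip resolving which address the host side should dial for the transport under test. A minimal reconstruction of that helper from the xtrace alone (the TEST_TRANSPORT variable name and the exact error handling are assumptions; only the ip_candidates table and the indirect expansion are visible in the trace):

# Reconstructed from the nvmf/common.sh@741-755 trace; a sketch, not the exact source.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # names of env vars, one per transport
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Give up if the transport is unset or has no candidate mapping (the two [[ -z ]] tests above).
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1

    ip=${ip_candidates[$TEST_TRANSPORT]}    # e.g. NVMF_INITIATOR_IP for tcp
    [[ -n ${!ip} ]] || return 1             # indirect expansion: that variable holds the address
    echo "${!ip}"                           # 10.0.0.1 in this run
}

Its output feeds straight into the -a argument of the bdev_nvme_attach_controller call that follows.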
00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.421 15:04:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.681 nvme0n1 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:45.681 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
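connect_authenticate itself is traced in the host/auth.sh@55-65 entries: it restricts the SPDK host to one digest and one DH group, attaches the controller with the matching DH-HMAC-CHAP key pair, checks that the controller actually appeared, and detaches it before the next combination. The same sequence can be driven standalone with SPDK's RPC client; in this sketch the rpc.py path is assumed, while the address, NQNs and key names mirror this particular run and would differ elsewhere:

#!/usr/bin/env bash
set -e
rpc=scripts/rpc.py      # assumed location of SPDK's RPC client
digest=sha384
dhgroup=ffdhe3072
keyid=1

# Allow only this digest/DH-group pair on the host (host/auth.sh@60).
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the DH-HMAC-CHAP key pair; key1/ckey1 are key names the
# test registered beforehand and are placeholders here (host/auth.sh@61).
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Authentication succeeded if the controller shows up; clean up afterwards
# so the next digest/dhgroup/keyid combination starts fresh (host/auth.sh@64-65).
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0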
00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.682 nvme0n1 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.682 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:45.942 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.943 nvme0n1 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:45.943 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.203 nvme0n1 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.203 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.464 nvme0n1 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.464 15:04:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:46.464 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.465 15:04:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.724 nvme0n1 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:46.724 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.725 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.984 nvme0n1 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.984 15:04:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.984 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:46.985 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.245 nvme0n1 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:50:47.245 15:04:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.245 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.505 nvme0n1 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.505 15:04:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:50:47.505 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.765 nvme0n1 00:50:47.765 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.765 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:47.765 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.765 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:47.765 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:47.766 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.025 nvme0n1 00:50:48.025 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.025 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:48.025 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:48.025 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.025 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.025 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.026 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.615 nvme0n1 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.615 15:04:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:48.615 15:04:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.615 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.876 nvme0n1 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:48.876 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.136 nvme0n1 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.136 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
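
    Each iteration in this trace exercises the same host-side sequence, which can be approximated by hand with SPDK's rpc.py. The sketch below is a minimal reconstruction of one pass (sha384 / ffdhe6144 / keyid 3), not the autotest helpers themselves (rpc_cmd, connect_authenticate and nvmet_auth_set_key are test-suite internals). It assumes rpc.py is pointed at the default RPC socket and that key3/ckey3 name keyring entries holding the DHHC-1 secrets, created earlier in the run (not shown in this excerpt); the address, port and NQNs are taken verbatim from the trace.

        # restrict the initiator to a single digest / DH group combination
        scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
        # attach with DH-HMAC-CHAP; --dhchap-ctrlr-key is only passed when a controller
        # (bidirectional) secret exists for this keyid, mirroring the ${ckeys[keyid]:+...} expansion above
        scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key3 --dhchap-ctrlr-key ckey3
        # confirm the controller authenticated and came up, then tear it down for the next iteration
        scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
        scripts/rpc.py bdev_nvme_detach_controller nvme0
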
00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.396 15:04:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.655 nvme0n1 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.655 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
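
    The secrets being echoed throughout the trace use the DHHC-1 container format defined for NVMe DH-HMAC-CHAP: "DHHC-1:<t>:<base64>:", where <t> (00 through 03) records the hash used to transform the secret (00 meaning no transformation) and the base64 payload is the raw secret followed by a 4-byte CRC-32 trailer. The accompanying echoes of 'hmac(sha384)' and the ffdhe* group name in each nvmet_auth_set_key call are presumably redirected into the kernel target's per-host configfs attributes; bash xtrace does not print redirection targets, so those paths are not visible in this excerpt and are an assumption here. A quick length check on the keyid-0 secret from this trace, assuming a host with base64 and wc available:

        # 48 base64 characters decode to 36 bytes: a 32-byte secret plus the 4-byte CRC-32 trailer
        echo -n MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR | base64 -d | wc -c   # prints 36
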
00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:49.656 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.224 nvme0n1 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.224 15:04:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.793 nvme0n1 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:50.793 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.362 nvme0n1 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:51.362 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.363 15:04:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.931 nvme0n1 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:51.931 15:04:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:51.931 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:51.932 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:51.932 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:51.932 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.497 nvme0n1 00:50:52.497 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.498 15:04:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.498 nvme0n1 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.498 15:04:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.498 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 nvme0n1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 nvme0n1 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:52.865 15:04:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:52.865 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:52.866 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:53.124 15:04:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.124 nvme0n1 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:53.124 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.125 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.383 nvme0n1 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.383 nvme0n1 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.383 
15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.383 15:04:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.642 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:53.642 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:53.642 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.643 15:04:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.643 nvme0n1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
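For orientation, the nvmet_auth_set_key invocation being traced here (host/auth.sh@42-51) provisions the kernel target side of DH-HMAC-CHAP for one key slot before the host attempts to connect. A minimal sketch of that step follows, under the assumption that the echoed values are written into the nvmet configfs entry for the host NQN; the trace shows the values but not their destinations, so the paths and placeholder secrets below are illustrative only.

# Target-side key provisioning, sketched from host/auth.sh@42-51 in the trace above.
# Assumption: the echoes land in the nvmet configfs host entry; paths are illustrative.
hostnqn=nqn.2024-02.io.spdk:host0                   # host NQN used throughout this run
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn    # assumed location, not shown in the trace

digest=sha512
dhgroup=ffdhe3072
key='DHHC-1:00:<base64-secret>:'    # placeholder for the DHHC-1 secret echoed at host/auth.sh@45
ckey='DHHC-1:02:<base64-secret>:'   # placeholder for the controller key echoed at host/auth.sh@46

echo "hmac(${digest})" > "$host_dir/dhchap_hash"     # host/auth.sh@48
echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"  # host/auth.sh@49
echo "$key"            > "$host_dir/dhchap_key"      # host/auth.sh@50
[[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # only when a ctrlr key exists (host/auth.sh@51)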
00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.643 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.902 nvme0n1 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.902 15:04:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:53.902 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
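The connect_authenticate half of each iteration (host/auth.sh@55-65) then exercises the SPDK initiator against that configuration. Condensed from the entries traced above, the sequence is roughly the following; rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, and "key${keyid}"/"ckey${keyid}" refer to key names the script registers earlier, outside this part of the log.

# Initiator-side check, condensed from host/auth.sh@57-65 as traced above (values taken from this iteration).
digest=sha512 dhgroup=ffdhe3072 keyid=3

# Limit the bdev_nvme module to the digest/dhgroup under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach over TCP to the target, authenticating with this slot's host and controller keys.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# The controller only shows up if DH-HMAC-CHAP succeeded; verify, then tear down for the next slot.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0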
00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:53.903 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.162 nvme0n1 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:54.162 
15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.162 nvme0n1 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.162 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.420 15:04:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.420 nvme0n1 00:50:54.420 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.420 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:54.420 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:54.420 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.420 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.420 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:54.678 15:04:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.678 nvme0n1 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.678 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
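By this point the sweep has moved from ffdhe2048 and ffdhe3072 into ffdhe4096; the enclosing loops at host/auth.sh@101-104 repeat the same two steps for every key slot of every DH group with the sha512 digest. A sketch of that driver loop, reconstructed only from what this part of the trace shows:

# Driver loop as it appears in this trace (host/auth.sh@101-104).
# Only the groups visible in this portion of the log are listed; the full script may cover more.
digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                          # keys[]/ckeys[] are populated earlier in host/auth.sh
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"  # target side (host/auth.sh@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side (host/auth.sh@104)
    done
done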
00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.937 nvme0n1 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:54.937 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.196 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.197 nvme0n1 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.197 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.456 15:04:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.456 nvme0n1 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:50:55.456 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:55.714 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.715 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.974 nvme0n1 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
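The trace above has finished the ffdhe4096 pass and is repeating the same cycle for ffdhe6144: for each key id, host/auth.sh programs the key on the target (nvmet_auth_set_key), restricts the host to a single digest/DH group pair (bdev_nvme_set_options --dhchap-digests ... --dhchap-dhgroups ...), attaches with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN when a controller key is defined; key id 4 has none), verifies the controller with bdev_nvme_get_controllers | jq, and detaches. A minimal sketch of that loop follows; it assumes SPDK's scripts/rpc.py is reachable as rpc.py, that the target already listens on 10.0.0.1:4420 with subsystem nqn.2024-02.io.spdk:cnode0, and that the key names key0..key4 / ckey0..ckey3 were registered earlier in the test (not shown in this excerpt).

#!/usr/bin/env bash
# Sketch (not the test itself) of the cycle this log repeats for every DH group
# and key id: configure the host's allowed digest/DH group, attach with
# DH-HMAC-CHAP keys, verify the controller exists, then detach.
# Assumed: rpc.py on PATH, target at 10.0.0.1:4420, key/ckey names pre-registered.
set -euo pipefail

digest=sha512
ckeys=(ckey0 ckey1 ckey2 ckey3 "")   # key id 4 has no controller key in this run

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in "${!ckeys[@]}"; do
    rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "${ckeys[keyid]}"}
    # The attach only counts if the controller is actually visible afterwards.
    [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0
  done
done

The ${ckeys[keyid]:+...} expansion mirrors the pattern auth.sh itself uses at line 58: the controller-key argument is only passed when a bidirectional secret exists for that key id.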
00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:55.974 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.234 nvme0n1 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:56.234 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.494 15:04:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.753 nvme0n1 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:56.753 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.012 nvme0n1 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:50:57.012 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.013 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.583 nvme0n1 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:57.583 15:04:16 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTg0NmU4NjcyY2Q4NmUyNDMyYjNkOTI2NWM5YjYxNmNMxaZR: 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZiYzM2ZGM3Mjk4ZDEzOGE3ZWQ4Y2MzYTFhZDI0MTRmZmFjNDMxYWU0ZjI5YjQyMGZiMDYyNGMwOThlZjg3MB6nxQY=: 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:57.583 15:04:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.153 nvme0n1 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.153 15:04:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.722 nvme0n1 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.722 15:04:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YWYzYzg1NWRhYTJhY2U1YmU1N2IwNTY3YmRkN2RkNzFJkkB2: 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NGQ2YzFjZWU0ZmJkMzY1MjQxZTQ0ZWE3YWM5MjBlMDYXUHKG: 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:58.722 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.292 nvme0n1 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:59.292 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDI2MTM3YjE2NTM2N2U0MGZlNTJjNDIwYjJkMGRhMTNmYTE3Nzc0NDI3ZjQ5OWEw15V7Ew==: 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: ]] 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDMwOTMzMmQ2ZGE2NDQ5MjFjN2FlNjEyYWQ2OGVlNDaVKlOw: 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:50:59.293 15:04:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.293 15:04:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.869 nvme0n1 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzA0ODQ2Mjg2MWNmNzM4ZDM4OWY4ODc1Yzg5M2U3NmI4MzFhYWJjMzNmZDJlMTM3ZWFkYWYxNTU3Yzg5NzE3MWUIids=: 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:50:59.869 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 nvme0n1 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmUyZGQ2MWQwN2I0ZTA4YjI1MTBlYzgwMDEwNjE4Njk0YjgxMjIxNTQ2ZDdmNGVkEUy1Mw==: 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGNiZDU3NmQwOWEyNTY1YzQzODllMTEwOWRlODRmZDZkZDk1ZDExMDQzMzk4NDMxPNntjA==: 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:00.449 
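The last exchange in this excerpt is the negative check: host/auth.sh@110 programs key 1 on the target for sha256/ffdhe2048, @111 reconfigures the host accordingly, and @112 then attempts bdev_nvme_attach_controller without any --dhchap-key. The attempt is wrapped in NOT, so the JSON-RPC failure (err Code=-5, Input/output error) is the expected result: the target rejects the unauthenticated connection, and the bdev_nvme_get_controllers | jq length check that follows (cut off at the end of this excerpt) confirms no controller object was left behind. A minimal sketch of that expected-failure pattern, under the same rpc.py assumption as above and with NOT simplified to a plain exit-status inversion (the real autotest helper does more bookkeeping):

#!/usr/bin/env bash
# Negative test: attaching without a DH-HMAC-CHAP key to a subsystem that
# requires authentication must fail, and must not leave a controller behind.
NOT() { ! "$@"; }   # simplified stand-in for the autotest NOT helper

NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

# The failed attach must not have created a controller object.
[[ "$(rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ]]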
15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 2024/07/22 15:04:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:00.449 request: 00:51:00.449 { 00:51:00.449 "method": "bdev_nvme_attach_controller", 00:51:00.449 "params": { 00:51:00.449 "name": "nvme0", 00:51:00.449 "trtype": "tcp", 00:51:00.449 "traddr": "10.0.0.1", 00:51:00.449 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:51:00.449 "adrfam": "ipv4", 00:51:00.449 "trsvcid": "4420", 00:51:00.449 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:51:00.449 } 00:51:00.449 } 00:51:00.449 Got JSON-RPC error response 00:51:00.449 GoRPCClient: error on JSON-RPC call 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:51:00.449 
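[Editor's note] The Input/output error above is the point of the test: the attach is wrapped in the suite's NOT helper, so the call is expected to be rejected by the kernel target (the host presents no DH-HMAC-CHAP key here, and mismatched keys in the following attempts). A minimal sketch of that negative-assertion pattern, inferred from the es bookkeeping visible in the trace, is shown below; the real helper in autotest_common.sh additionally validates its argument and treats exit codes above 128 specially.

  # Sketch: succeed only when the wrapped command fails (cf. the es handling in the trace).
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }
  # Usage mirroring the trace: this attach must be refused by the target.
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0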
15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 2024/07/22 15:04:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:00.449 request: 00:51:00.449 { 00:51:00.449 "method": "bdev_nvme_attach_controller", 00:51:00.449 "params": { 00:51:00.449 "name": "nvme0", 00:51:00.449 "trtype": "tcp", 00:51:00.449 "traddr": "10.0.0.1", 00:51:00.449 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:51:00.449 "adrfam": "ipv4", 00:51:00.449 "trsvcid": "4420", 00:51:00.449 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:51:00.449 "dhchap_key": "key2" 00:51:00.449 } 00:51:00.449 } 00:51:00.449 Got 
JSON-RPC error response 00:51:00.449 GoRPCClient: error on JSON-RPC call 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:00.449 15:04:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.449 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:00.449 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:51:00.449 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:51:00.449 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:00.450 2024/07/22 15:04:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:51:00.450 request: 00:51:00.450 { 00:51:00.450 "method": "bdev_nvme_attach_controller", 00:51:00.450 "params": { 00:51:00.450 "name": "nvme0", 00:51:00.450 "trtype": "tcp", 00:51:00.450 "traddr": "10.0.0.1", 00:51:00.450 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:51:00.450 "adrfam": "ipv4", 00:51:00.450 "trsvcid": "4420", 00:51:00.450 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:51:00.450 "dhchap_key": "key1", 00:51:00.450 "dhchap_ctrlr_key": "ckey2" 00:51:00.450 } 00:51:00.450 } 00:51:00.450 Got JSON-RPC error response 00:51:00.450 GoRPCClient: error on JSON-RPC call 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:51:00.450 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:51:00.709 rmmod nvme_tcp 00:51:00.709 rmmod nvme_fabrics 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 109352 ']' 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 109352 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 109352 ']' 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 109352 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109352 00:51:00.709 killing process with pid 109352 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109352' 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 109352 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 109352 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:00.709 15:04:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:51:00.968 15:04:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:51:01.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:51:01.906 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:51:01.906 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:51:01.906 15:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.1Yp /tmp/spdk.key-null.sru /tmp/spdk.key-sha256.YRY /tmp/spdk.key-sha384.YVT /tmp/spdk.key-sha512.9ld /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:51:01.906 15:04:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:51:02.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
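[Editor's note] The cleanup traced above dismantles the kernel nvmet target through configfs in roughly the reverse order of its creation: unlink the allowed host, remove the host entry, disable and delete the namespace, unlink the subsystem from the port, remove the port and subsystem directories, then unload nvmet_tcp/nvmet. A condensed sketch follows; the target of the bare "echo 0" is not visible in the trace and is assumed here to be the namespace's enable attribute.

  # Sketch of the kernel-target teardown order seen above (paths per the trace).
  cfg=/sys/kernel/config/nvmet
  subnqn=nqn.2024-02.io.spdk:cnode0
  hostnqn=nqn.2024-02.io.spdk:host0
  rm "$cfg/subsystems/$subnqn/allowed_hosts/$hostnqn"
  rmdir "$cfg/hosts/$hostnqn"
  echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"   # assumption: disable before removal
  rm -f "$cfg/ports/1/subsystems/$subnqn"
  rmdir "$cfg/subsystems/$subnqn/namespaces/1"
  rmdir "$cfg/ports/1"
  rmdir "$cfg/subsystems/$subnqn"
  modprobe -r nvmet_tcp nvmet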
00:51:02.474 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:51:02.474 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:51:02.474 00:51:02.474 real 0m32.151s 00:51:02.474 user 0m29.870s 00:51:02.474 sys 0m4.658s 00:51:02.474 15:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:51:02.474 15:04:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:51:02.474 ************************************ 00:51:02.474 END TEST nvmf_auth_host 00:51:02.474 ************************************ 00:51:02.474 15:04:22 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:51:02.474 15:04:22 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:51:02.474 15:04:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:51:02.474 15:04:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:51:02.474 15:04:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:51:02.474 ************************************ 00:51:02.474 START TEST nvmf_digest 00:51:02.474 ************************************ 00:51:02.474 15:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:51:02.735 * Looking for test storage... 00:51:02.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:51:02.735 Cannot find device "nvmf_tgt_br" 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:51:02.735 Cannot find device "nvmf_tgt_br2" 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:51:02.735 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:51:02.735 Cannot find device "nvmf_tgt_br" 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:51:02.736 Cannot find device "nvmf_tgt_br2" 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:02.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:51:02.736 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:02.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set 
nvmf_tgt_br master nvmf_br 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:51:02.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:02.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:51:02.996 00:51:02.996 --- 10.0.0.2 ping statistics --- 00:51:02.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:02.996 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:51:02.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:02.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:51:02.996 00:51:02.996 --- 10.0.0.3 ping statistics --- 00:51:02.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:02.996 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:02.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:02.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:51:02.996 00:51:02.996 --- 10.0.0.1 ping statistics --- 00:51:02.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:02.996 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:51:02.996 ************************************ 00:51:02.996 START TEST nvmf_digest_clean 00:51:02.996 ************************************ 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == 
\d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=110916 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 110916 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 110916 ']' 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:02.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:02.996 15:04:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:03.256 [2024-07-22 15:04:22.652461] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:03.256 [2024-07-22 15:04:22.652521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:03.256 [2024-07-22 15:04:22.791017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:03.256 [2024-07-22 15:04:22.840004] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:03.256 [2024-07-22 15:04:22.840056] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:03.256 [2024-07-22 15:04:22.840062] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:03.256 [2024-07-22 15:04:22.840067] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:03.256 [2024-07-22 15:04:22.840070] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
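[Editor's note] Before the digest tests begin, nvmf_veth_init (traced a little earlier) builds the virtual topology the suite runs on: a network namespace for the target, veth pairs whose bridge-side ends are enslaved to nvmf_br, 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 inside the namespace, iptables ACCEPT rules, and ping checks in both directions. A condensed sketch of those commands, with the second target interface omitted for brevity:

  # Sketch of the veth/bridge test network from the trace (one target interface shown).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator-to-target reachability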
00:51:03.256 [2024-07-22 15:04:22.840091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:04.195 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:04.195 null0 00:51:04.195 [2024-07-22 15:04:23.614332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:04.196 [2024-07-22 15:04:23.638366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=110966 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 110966 /var/tmp/bperf.sock 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 110966 ']' 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:04.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:04.196 15:04:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:04.196 [2024-07-22 15:04:23.698324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:04.196 [2024-07-22 15:04:23.698380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110966 ] 00:51:04.455 [2024-07-22 15:04:23.835348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:04.455 [2024-07-22 15:04:23.880191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:05.028 15:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:05.028 15:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:51:05.028 15:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:51:05.028 15:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:51:05.028 15:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:51:05.308 15:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:05.308 15:04:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:05.568 nvme0n1 00:51:05.568 15:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:51:05.568 15:04:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:05.568 Running I/O for 2 seconds... 
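[Editor's note] Each run_bperf iteration traced from here on follows the same cycle: launch bdevperf suspended (-z --wait-for-rpc) on its own RPC socket, complete framework initialization, attach an NVMe-oF TCP controller with data digest enabled (--ddgst), and drive the timed workload through bdevperf.py. A condensed sketch using the paths and arguments from this first randread run; the suite also waits for the socket to appear before issuing RPCs.

  # Sketch of one run_bperf cycle (randread, 4096-byte I/O, queue depth 128).
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bperf.sock
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  "$spdk/scripts/rpc.py" -s "$sock" framework_start_init
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests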
00:51:08.106 00:51:08.106 Latency(us) 00:51:08.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:08.106 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:51:08.106 nvme0n1 : 2.00 25436.84 99.36 0.00 0.00 5027.13 2575.65 15568.38 00:51:08.106 =================================================================================================================== 00:51:08.106 Total : 25436.84 99.36 0.00 0.00 5027.13 2575.65 15568.38 00:51:08.106 0 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:51:08.106 | select(.opcode=="crc32c") 00:51:08.106 | "\(.module_name) \(.executed)"' 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 110966 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 110966 ']' 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 110966 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110966 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:08.106 killing process with pid 110966 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110966' 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 110966 00:51:08.106 Received shutdown signal, test time was about 2.000000 seconds 00:51:08.106 00:51:08.106 Latency(us) 00:51:08.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:08.106 =================================================================================================================== 00:51:08.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:08.106 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 110966 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111051 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111051 /var/tmp/bperf.sock 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111051 ']' 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:08.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:08.107 15:04:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:08.107 I/O size of 131072 is greater than zero copy threshold (65536). 00:51:08.107 Zero copy mechanism will not be used. 00:51:08.107 [2024-07-22 15:04:27.604113] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:51:08.107 [2024-07-22 15:04:27.604186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111051 ] 00:51:08.367 [2024-07-22 15:04:27.742856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:08.367 [2024-07-22 15:04:27.793096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:08.935 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:08.935 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:51:08.935 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:51:08.935 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:51:08.935 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:51:09.194 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:09.194 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:09.453 nvme0n1 00:51:09.453 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:51:09.453 15:04:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:09.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:51:09.453 Zero copy mechanism will not be used. 00:51:09.453 Running I/O for 2 seconds... 
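[Editor's note] After each 2-second run (the results for this one follow below), the suite verifies that the digests were computed where expected: it queries the accel framework's statistics over the bperf socket and requires a non-zero crc32c execution count attributed to the software module, since DSA offload is disabled in these runs. A sketch of that check, using the jq filter visible in the trace:

  # Sketch of the per-run crc32c accounting check (software module expected, DSA off).
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 )) && [[ "$acc_module" == software ]]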
00:51:11.986 00:51:11.986 Latency(us) 00:51:11.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:11.986 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:51:11.986 nvme0n1 : 2.00 10094.26 1261.78 0.00 0.00 1582.46 461.47 5351.63 00:51:11.986 =================================================================================================================== 00:51:11.986 Total : 10094.26 1261.78 0.00 0.00 1582.46 461.47 5351.63 00:51:11.986 0 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:51:11.986 | select(.opcode=="crc32c") 00:51:11.986 | "\(.module_name) \(.executed)"' 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111051 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111051 ']' 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111051 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111051 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111051' 00:51:11.986 killing process with pid 111051 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111051 00:51:11.986 Received shutdown signal, test time was about 2.000000 seconds 00:51:11.986 00:51:11.986 Latency(us) 00:51:11.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:11.986 =================================================================================================================== 00:51:11.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111051 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111141 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111141 /var/tmp/bperf.sock 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111141 ']' 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:11.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:11.986 15:04:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:11.986 [2024-07-22 15:04:31.539670] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:51:11.986 [2024-07-22 15:04:31.539744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111141 ] 00:51:12.246 [2024-07-22 15:04:31.677595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:12.246 [2024-07-22 15:04:31.723307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:12.834 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:12.834 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:51:12.834 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:51:12.834 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:51:12.834 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:51:13.092 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:13.092 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:13.352 nvme0n1 00:51:13.352 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:51:13.352 15:04:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:13.610 Running I/O for 2 seconds... 
00:51:15.515 00:51:15.515 Latency(us) 00:51:15.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:15.515 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:51:15.515 nvme0n1 : 2.01 30158.21 117.81 0.00 0.00 4238.37 2174.99 13679.57 00:51:15.515 =================================================================================================================== 00:51:15.515 Total : 30158.21 117.81 0.00 0.00 4238.37 2174.99 13679.57 00:51:15.515 0 00:51:15.515 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:51:15.515 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:51:15.515 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:51:15.515 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:51:15.515 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:51:15.515 | select(.opcode=="crc32c") 00:51:15.515 | "\(.module_name) \(.executed)"' 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111141 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111141 ']' 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111141 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111141 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:15.774 killing process with pid 111141 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111141' 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111141 00:51:15.774 Received shutdown signal, test time was about 2.000000 seconds 00:51:15.774 00:51:15.774 Latency(us) 00:51:15.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:15.774 =================================================================================================================== 00:51:15.774 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:15.774 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111141 00:51:16.033 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:51:16.033 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:51:16.033 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=111226 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 111226 /var/tmp/bperf.sock 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 111226 ']' 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:16.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:16.034 15:04:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:16.034 I/O size of 131072 is greater than zero copy threshold (65536). 00:51:16.034 Zero copy mechanism will not be used. 00:51:16.034 [2024-07-22 15:04:35.532326] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:51:16.034 [2024-07-22 15:04:35.532394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111226 ] 00:51:16.292 [2024-07-22 15:04:35.670382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:16.292 [2024-07-22 15:04:35.720530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:16.860 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:16.860 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:51:16.860 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:51:16.860 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:51:16.860 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:51:17.120 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:17.120 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:17.380 nvme0n1 00:51:17.380 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:51:17.380 15:04:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:17.380 I/O size of 131072 is greater than zero copy threshold (65536). 00:51:17.380 Zero copy mechanism will not be used. 00:51:17.380 Running I/O for 2 seconds... 
00:51:19.927 00:51:19.927 Latency(us) 00:51:19.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:19.927 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:51:19.927 nvme0n1 : 2.00 10383.82 1297.98 0.00 0.00 1537.71 1144.73 7183.20 00:51:19.927 =================================================================================================================== 00:51:19.927 Total : 10383.82 1297.98 0.00 0.00 1537.71 1144.73 7183.20 00:51:19.927 0 00:51:19.927 15:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:51:19.927 15:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:51:19.927 15:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:51:19.927 15:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:51:19.927 15:04:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:51:19.927 | select(.opcode=="crc32c") 00:51:19.927 | "\(.module_name) \(.executed)"' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 111226 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 111226 ']' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 111226 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111226 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111226' 00:51:19.927 killing process with pid 111226 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 111226 00:51:19.927 Received shutdown signal, test time was about 2.000000 seconds 00:51:19.927 00:51:19.927 Latency(us) 00:51:19.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:19.927 =================================================================================================================== 00:51:19.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 111226 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 110916 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 110916 ']' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 110916 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110916 00:51:19.927 killing process with pid 110916 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110916' 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 110916 00:51:19.927 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 110916 00:51:20.186 00:51:20.186 real 0m17.016s 00:51:20.186 user 0m31.774s 00:51:20.186 sys 0m4.355s 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:51:20.186 ************************************ 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:51:20.186 END TEST nvmf_digest_clean 00:51:20.186 ************************************ 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:51:20.186 ************************************ 00:51:20.186 START TEST nvmf_digest_error 00:51:20.186 ************************************ 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=111335 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 111335 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 111335 ']' 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:20.186 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:20.186 15:04:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:20.186 [2024-07-22 15:04:39.734217] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:20.186 [2024-07-22 15:04:39.734284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:20.445 [2024-07-22 15:04:39.874944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:20.445 [2024-07-22 15:04:39.923435] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:20.445 [2024-07-22 15:04:39.923486] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:20.445 [2024-07-22 15:04:39.923492] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:20.445 [2024-07-22 15:04:39.923496] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:20.445 [2024-07-22 15:04:39.923500] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:20.445 [2024-07-22 15:04:39.923519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:21.015 [2024-07-22 15:04:40.634525] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:21.015 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:21.274 null0 00:51:21.274 [2024-07-22 
15:04:40.722614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:21.274 [2024-07-22 15:04:40.746647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=111384 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 111384 /var/tmp/bperf.sock 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 111384 ']' 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:21.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:21.274 15:04:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:21.274 [2024-07-22 15:04:40.805778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:51:21.274 [2024-07-22 15:04:40.805844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111384 ] 00:51:21.533 [2024-07-22 15:04:40.945037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:21.533 [2024-07-22 15:04:40.994467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:22.102 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:22.102 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:51:22.102 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:22.102 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:22.384 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:51:22.385 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.385 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:22.385 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.385 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:22.385 15:04:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:22.642 nvme0n1 00:51:22.642 15:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:51:22.642 15:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:22.642 15:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:22.642 15:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:22.642 15:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:51:22.642 15:04:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:22.642 Running I/O for 2 seconds... 
00:51:22.642 [2024-07-22 15:04:42.213478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.642 [2024-07-22 15:04:42.213540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.642 [2024-07-22 15:04:42.213549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.642 [2024-07-22 15:04:42.224052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.642 [2024-07-22 15:04:42.224086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.642 [2024-07-22 15:04:42.224110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.642 [2024-07-22 15:04:42.234941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.642 [2024-07-22 15:04:42.234972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.642 [2024-07-22 15:04:42.234979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.642 [2024-07-22 15:04:42.244198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.642 [2024-07-22 15:04:42.244232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.642 [2024-07-22 15:04:42.244240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.642 [2024-07-22 15:04:42.256660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.642 [2024-07-22 15:04:42.256701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.642 [2024-07-22 15:04:42.256709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.642 [2024-07-22 15:04:42.265680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.642 [2024-07-22 15:04:42.265710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.642 [2024-07-22 15:04:42.265718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.276230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.276262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.276285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.286554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.286586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.286594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.297879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.297913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.297921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.306817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.306850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.306874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.317965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.317994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.318002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.329389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.329424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.329431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.340104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.340133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.340140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.350602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.350633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.350641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.359903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.359932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.359955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.369990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.370021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.370028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.380021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.380054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.380061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.390086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.390126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.390134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.399341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.399373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.399396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.408961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.408993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.409016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.901 [2024-07-22 15:04:42.417858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.901 [2024-07-22 15:04:42.417890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.901 [2024-07-22 15:04:42.417898] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.429392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.429433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.429458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.439143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.439173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.439181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.447797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.447827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.447850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.457948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.457979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.457987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.468232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.468264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.468271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.478724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.478755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.478762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.489150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.489182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:22.902 [2024-07-22 15:04:42.489205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.498112] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.498142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.498149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.508307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.508340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.508348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.518413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.518445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.518469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:22.902 [2024-07-22 15:04:42.527293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:22.902 [2024-07-22 15:04:42.527326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:22.902 [2024-07-22 15:04:42.527333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.161 [2024-07-22 15:04:42.539566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.161 [2024-07-22 15:04:42.539601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.161 [2024-07-22 15:04:42.539624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.549879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.549911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.549918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.560053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.560085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24825 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.560093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.569613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.569645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.569653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.579214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.579250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.579257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.588888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.588922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.588946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.600672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.600718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.600743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.611793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.611832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.611840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.623357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.623423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.623434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.634942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.634987] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.634996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.645080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.645123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.645149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.655449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.655487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.655495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.667564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.667601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.667610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.678484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.678522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.678531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.689922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.689957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.689981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.699291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.699325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:23.162 [2024-07-22 15:04:42.699349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:23.162 [2024-07-22 15:04:42.708927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:23.162 [2024-07-22 15:04:42.708965] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:51:23.162 [2024-07-22 15:04:42.708973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repetitive log output condensed: the same three-message sequence -- a data digest error reported by nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair=(0xe6ed40), the affected READ printed by nvme_qpair.c: 243:nvme_io_qpair_print_command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion printed by nvme_qpair.c: 474:spdk_nvme_print_completion -- repeats roughly every 10 ms for each subsequent failed READ on qid:1, with varying cid and lba, from 15:04:42.719648 through 15:04:44.175802; only the final occurrence is shown below ...]
READ sqid:1 cid:21 nsid:1 lba:18645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:24.730 [2024-07-22 15:04:44.131750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:24.730 [2024-07-22 15:04:44.141758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:24.730 [2024-07-22 15:04:44.141797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:24.730 [2024-07-22 15:04:44.141806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:24.730 [2024-07-22 15:04:44.154523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:24.730 [2024-07-22 15:04:44.154566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:24.730 [2024-07-22 15:04:44.154574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:24.730 [2024-07-22 15:04:44.163730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:24.730 [2024-07-22 15:04:44.163765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:24.730 [2024-07-22 15:04:44.163773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:24.730 [2024-07-22 15:04:44.175761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:24.730 [2024-07-22 15:04:44.175794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:24.730 [2024-07-22 15:04:44.175802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:24.730 [2024-07-22 15:04:44.187838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe6ed40) 00:51:24.730 [2024-07-22 15:04:44.187871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:24.730 [2024-07-22 15:04:44.187878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:24.730 00:51:24.730 Latency(us) 00:51:24.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:24.730 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:51:24.730 nvme0n1 : 2.00 24516.01 95.77 0.00 0.00 5215.86 2575.65 27130.19 00:51:24.731 =================================================================================================================== 00:51:24.731 Total : 24516.01 95.77 0.00 0.00 5215.86 2575.65 27130.19 00:51:24.731 0 00:51:24.731 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:51:24.731 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:51:24.731 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:51:24.731 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:51:24.731 | .driver_specific 00:51:24.731 | .nvme_error 00:51:24.731 | .status_code 00:51:24.731 | .command_transient_transport_error' 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 192 > 0 )) 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 111384 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 111384 ']' 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 111384 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111384 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:24.989 killing process with pid 111384 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111384' 00:51:24.989 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 111384 00:51:24.989 Received shutdown signal, test time was about 2.000000 seconds 00:51:24.989 00:51:24.989 Latency(us) 00:51:24.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:24.989 =================================================================================================================== 00:51:24.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:24.990 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 111384 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=111475 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 111475 /var/tmp/bperf.sock 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 111475 ']' 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:51:25.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:25.248 15:04:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:25.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:51:25.248 Zero copy mechanism will not be used. 00:51:25.248 [2024-07-22 15:04:44.819005] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:25.248 [2024-07-22 15:04:44.819076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111475 ] 00:51:25.507 [2024-07-22 15:04:44.958631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:25.507 [2024-07-22 15:04:45.041424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:26.074 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:26.074 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:51:26.074 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:26.074 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:26.331 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:51:26.331 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:26.331 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:26.331 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:26.331 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:26.331 15:04:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:26.588 nvme0n1 00:51:26.588 15:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:51:26.588 15:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:26.588 15:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:26.588 15:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:26.588 15:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:51:26.588 15:04:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:26.847 I/O size of 131072 is greater than zero copy threshold (65536). 00:51:26.847 Zero copy mechanism will not be used. 00:51:26.847 Running I/O for 2 seconds... 00:51:26.847 [2024-07-22 15:04:46.237474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.847 [2024-07-22 15:04:46.237536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.847 [2024-07-22 15:04:46.237548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.847 [2024-07-22 15:04:46.242429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.847 [2024-07-22 15:04:46.242479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.847 [2024-07-22 15:04:46.242488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.847 [2024-07-22 15:04:46.247096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.847 [2024-07-22 15:04:46.247140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.847 [2024-07-22 15:04:46.247150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.847 [2024-07-22 15:04:46.251740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.847 [2024-07-22 15:04:46.251775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.847 [2024-07-22 15:04:46.251783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.847 [2024-07-22 15:04:46.254949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.847 [2024-07-22 15:04:46.254981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.847 [2024-07-22 15:04:46.254989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.847 [2024-07-22 15:04:46.259625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.847 [2024-07-22 15:04:46.259657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.259665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.262970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.263003] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.263011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.266887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.266921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.266929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.271257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.271290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.271297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.276113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.276146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.276154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.279653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.279693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.279702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.284244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.284280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.284289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.289049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.289084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.289093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.292082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.292111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.292118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.297139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.297172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.297181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.300693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.300721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.300729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.305049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.305081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.305089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.309330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.309365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.309374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.314127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.314154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.314161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.317608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.317639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.317647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.322344] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.322375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.322382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.325633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.325662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.325682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.329567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.329598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.329606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.334567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.334596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.334604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.338060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.338088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.338096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.342179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.342209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.342216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.346753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.346782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.346789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:51:26.848 [2024-07-22 15:04:46.349864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.349893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.349901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.354430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.354464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.354472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.359302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.359338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.359346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.362459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.362492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.362499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.366613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.366642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.848 [2024-07-22 15:04:46.366649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.848 [2024-07-22 15:04:46.371698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.848 [2024-07-22 15:04:46.371727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.371734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.374862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.374890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.374897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.379327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.379355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.379363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.383002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.383033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.383041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.386632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.386661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.386677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.390731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.390759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.390766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.395298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.395329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.395336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.398873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.398902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.398909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.402872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.402901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.402909] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.407634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.407677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.407685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.411375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.411406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.411413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.414921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.414953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.414960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.419392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.419425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.419434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.424212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.424249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.424257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.428890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.428927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.428936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.432031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.432063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.432070] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.436774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.436807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.436815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.441301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.441341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.441350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.444839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.444874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.444882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.449806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.449836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.449844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.453451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.453482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.453490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.457715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.457743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.457751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.460931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.460961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:26.849 [2024-07-22 15:04:46.460970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.465485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.465516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.465525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.468742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.468771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.468779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:26.849 [2024-07-22 15:04:46.472784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:26.849 [2024-07-22 15:04:46.472815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:26.849 [2024-07-22 15:04:46.472823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.109 [2024-07-22 15:04:46.477778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.109 [2024-07-22 15:04:46.477815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.109 [2024-07-22 15:04:46.477825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.109 [2024-07-22 15:04:46.482289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.109 [2024-07-22 15:04:46.482320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.109 [2024-07-22 15:04:46.482328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.109 [2024-07-22 15:04:46.485447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.109 [2024-07-22 15:04:46.485481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.109 [2024-07-22 15:04:46.485490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.109 [2024-07-22 15:04:46.490263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.490293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.490301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.494440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.494469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.494476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.497576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.497608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.497616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.501680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.501710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.501718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.506487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.506515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.506523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.510899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.510927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.510936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.514369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.514397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.514405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.518812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.518840] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.518848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.522476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.522505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.522512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.526861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.526889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.526897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.530651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.530695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.530705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.533976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.534008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.534027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.538261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.538296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.538305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.541630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.541686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.541703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.546056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.546091] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.546100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.551024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.551055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.551063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.554426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.554456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.554465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.558309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.558342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.558351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.562632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.562661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.562690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.566387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.566418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.566425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.570525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.570555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.570563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.575495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 
00:51:27.110 [2024-07-22 15:04:46.575526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.575534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.578665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.578703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.578710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.583281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.583312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.583319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.587096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.587125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.587132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.591153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.591183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.110 [2024-07-22 15:04:46.591190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.110 [2024-07-22 15:04:46.595589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.110 [2024-07-22 15:04:46.595618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.595626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.599202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.599231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.599238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.603978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.604007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.604014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.608327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.608355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.608363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.611244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.611274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.611281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.615844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.615875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.615882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.619128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.619156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.619164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.622979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.623009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.623016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.627622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.627653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.627662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.631275] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.631303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.631310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.635623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.635652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.635660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.639810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.639840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.639847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.643759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.643789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.643797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.647300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.647331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.647338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.650900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.650930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.650938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.654757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.654784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.654792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:51:27.111 [2024-07-22 15:04:46.658506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.658535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.658543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.662321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.662350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.662358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.666338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.666367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.666375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.670164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.670193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.670202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.674611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.674640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.674647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.677840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.677870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.677877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.682724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.682750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.682757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.687319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.687350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.687358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.690483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.690513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.690520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.694648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.111 [2024-07-22 15:04:46.694690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.111 [2024-07-22 15:04:46.694699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.111 [2024-07-22 15:04:46.698541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.698576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.698584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.702412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.702445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.702454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.706764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.706794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.706802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.710329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.710360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.710368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.714291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.714325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.714332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.718846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.718878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.718886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.722839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.722869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.722876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.726645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.726699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.726707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.731265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.731293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.731301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.112 [2024-07-22 15:04:46.734995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.112 [2024-07-22 15:04:46.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.112 [2024-07-22 15:04:46.735032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.738690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.738736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.738746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.743234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.743267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.743274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.747776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.747807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.747815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.750894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.750922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.750930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.755735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.755764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.755771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.760184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.760214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.760221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.763513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.763541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.763548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.767632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.767661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 
[2024-07-22 15:04:46.767682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.771618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.771647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.771655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.775992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.776023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.776031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.779877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.779908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.779916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.783830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.783871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.783879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.787471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.787530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.787538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.791337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.791367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.791375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.795026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.795060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.795068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.799543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.799592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.804119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.804161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.804170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.807815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.807849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.807857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.811109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.811138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.811145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.815927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.815957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.815965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.820398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.820427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.820436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.823552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.823581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.823589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.828047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.828078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.828085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.832385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.832413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.372 [2024-07-22 15:04:46.832420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.372 [2024-07-22 15:04:46.836676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.372 [2024-07-22 15:04:46.836714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.836723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.840058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.840086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.840094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.844059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.844090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.844097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.848479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.848508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.848515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.851710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.851737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.851744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.856200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.856229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.856237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.860982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.861013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.861020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.864187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.864214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.864222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.868153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.868181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.868188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.872761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.872789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.872796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.876899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.876930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.876937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.880025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 
[2024-07-22 15:04:46.880055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.880062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.883717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.883750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.883757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.888387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.888421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.888429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.892443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.892478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.892485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.896884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.896916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.896924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.902203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.902235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.902243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.905604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.905633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.905641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.910301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.910329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.910337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.914204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.914233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.914240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.918423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.918453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.918461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.923549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.923578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.923586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.928906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.928933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.928940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.934743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.934771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.934779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.938360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.938402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.938409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.942852] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.942881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.373 [2024-07-22 15:04:46.942889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.373 [2024-07-22 15:04:46.946506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.373 [2024-07-22 15:04:46.946534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.946541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.950645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.950688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.950696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.955856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.955884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.955892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.959771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.959799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.959807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.964187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.964216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.964224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.967430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.967459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.967467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:51:27.374 [2024-07-22 15:04:46.971414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.971445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.971454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.976161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.976196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.976204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.979711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.979760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.979773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.983775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.983806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.983813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.988489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.988518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.988526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.992216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.992245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.992252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:46.996055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:46.996084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:46.996091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.374 [2024-07-22 15:04:47.000178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.374 [2024-07-22 15:04:47.000220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.374 [2024-07-22 15:04:47.000229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.634 [2024-07-22 15:04:47.005113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.634 [2024-07-22 15:04:47.005146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.634 [2024-07-22 15:04:47.005154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.634 [2024-07-22 15:04:47.007985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.634 [2024-07-22 15:04:47.008026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.634 [2024-07-22 15:04:47.008039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:27.634 [2024-07-22 15:04:47.012577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.634 [2024-07-22 15:04:47.012618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.634 [2024-07-22 15:04:47.012627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:27.634 [2024-07-22 15:04:47.017218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.634 [2024-07-22 15:04:47.017249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.634 [2024-07-22 15:04:47.017257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:27.634 [2024-07-22 15:04:47.021130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.634 [2024-07-22 15:04:47.021160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.634 [2024-07-22 15:04:47.021167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:27.634 [2024-07-22 15:04:47.024435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:27.634 [2024-07-22 15:04:47.024463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:27.634 [2024-07-22 15:04:47.024470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:51:27.634 [2024-07-22 15:04:47.029265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080)
00:51:27.634 [2024-07-22 15:04:47.029299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:51:27.634 [2024-07-22 15:04:47.029307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:51:27.634 [2024-07-22 15:04:47.032731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080)
00:51:27.634 [2024-07-22 15:04:47.032760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:51:27.634 [2024-07-22 15:04:47.032768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-message sequence (data digest error on tqpair=(0x21c6080), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for the remaining qid:1 READ commands with varying cid, lba and sqhd values, from [2024-07-22 15:04:47.036817] through [2024-07-22 15:04:47.607502] (elapsed 00:51:27.634 to 00:51:28.158) ...]
00:51:28.158 [2024-07-22 15:04:47.611286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080)
00:51:28.158 [2024-07-22 15:04:47.611315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:51:28.158 [2024-07-22 15:04:47.611322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.158 [2024-07-22 15:04:47.615201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.158 [2024-07-22 15:04:47.615230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.158 [2024-07-22 15:04:47.615237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.158 [2024-07-22 15:04:47.619393] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.158 [2024-07-22 15:04:47.619429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.158 [2024-07-22 15:04:47.619437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.158 [2024-07-22 15:04:47.623877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.158 [2024-07-22 15:04:47.623909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.158 [2024-07-22 15:04:47.623917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.158 [2024-07-22 15:04:47.626950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.158 [2024-07-22 15:04:47.626981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.158 [2024-07-22 15:04:47.626990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.158 [2024-07-22 15:04:47.631634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.158 [2024-07-22 15:04:47.631676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.158 [2024-07-22 15:04:47.631684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.158 [2024-07-22 15:04:47.636587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.158 [2024-07-22 15:04:47.636644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.158 [2024-07-22 15:04:47.636655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.158 [2024-07-22 15:04:47.639784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.639813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.639821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.644722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.644751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.644758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.649557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.649588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.649595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.652731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.652755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.652762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.657258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.657288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.657295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.661666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.661727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.661736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.666328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.666358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.666365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.669703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.669730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 
[2024-07-22 15:04:47.669738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.674461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.674491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.674499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.678043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.678070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.678077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.682245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.682273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.682280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.686778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.686807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.686814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.690024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.690054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.690061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.693925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.693955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.693963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.698638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.698677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.698685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.701637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.701677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.701685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.705655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.705693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.705701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.709689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.709716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.709723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.713686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.713713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.713721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.717342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.717372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.717379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.721396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.721426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.721434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.725028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.725058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.725065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.727992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.728020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.728028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.731908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.731937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.731945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.735780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.735810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.735818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.740006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.740036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.159 [2024-07-22 15:04:47.740043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.159 [2024-07-22 15:04:47.744058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.159 [2024-07-22 15:04:47.744088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.744095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.747431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.747458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.747466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.752016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.752045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.752053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.756552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.756584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.756592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.760183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.760212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.760220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.765106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.765138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.765146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.769196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.769229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.769238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.772592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.772643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.772651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.777109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.777142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.777150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.781202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 
[2024-07-22 15:04:47.781234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.781242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.160 [2024-07-22 15:04:47.785219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.160 [2024-07-22 15:04:47.785253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.160 [2024-07-22 15:04:47.785261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.420 [2024-07-22 15:04:47.790212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.420 [2024-07-22 15:04:47.790246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.420 [2024-07-22 15:04:47.790254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.420 [2024-07-22 15:04:47.793988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.420 [2024-07-22 15:04:47.794028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.420 [2024-07-22 15:04:47.794041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.420 [2024-07-22 15:04:47.798127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.798160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.798168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.803133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.803166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.803174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.806591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.806626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.806635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.811445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.811479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.811488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.815728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.815761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.815770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.819766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.819799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.819808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.824094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.824126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.824135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.827928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.827968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.827978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.832069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.832105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.832115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.836383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.836422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.836433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.839870] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.839912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.844002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.844035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.844044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.848655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.848700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.848710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.852381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.852414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.852424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.856041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.856074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.856083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.860350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.860381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.860391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.864191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.864223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.864233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
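The repeated "data digest error" entries above come from the NVMe/TCP receive path: each C2H DATA PDU carries a CRC32C data digest, the receiver recomputes the digest over the payload it actually received, and a mismatch is reported as a data digest error with the command completing as COMMAND TRANSIENT TRANSPORT ERROR (00/22). The following is a minimal, self-contained C sketch of that comparison only; it is not SPDK's implementation, and crc32c() and data_digest_ok() are hypothetical helpers used here purely for illustration.

/*
 * Minimal sketch (not SPDK code) of the check reported by the "data digest
 * error" messages: recompute CRC32C over the received PDU payload and compare
 * it with the digest carried in the PDU.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
static uint32_t
crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: true when the recomputed digest matches the received one. */
static bool
data_digest_ok(const uint8_t *payload, size_t len, uint32_t received_digest)
{
	return crc32c(payload, len) == received_digest;
}

int
main(void)
{
	const char *payload = "example C2H DATA payload";
	size_t len = strlen(payload);
	uint32_t good_digest = crc32c((const uint8_t *)payload, len);

	/* Matching digest: the read completes normally. */
	printf("intact payload:   digest ok = %d\n",
	       data_digest_ok((const uint8_t *)payload, len, good_digest));
	/* Mismatched digest: the receiver flags a data digest error, as in the log. */
	printf("corrupted digest: digest ok = %d\n",
	       data_digest_ok((const uint8_t *)payload, len, good_digest ^ 0x1));
	return 0;
}

In the log above this check fails repeatedly on the same qpair (tqpair=0x21c6080), which is consistent with a test that deliberately corrupts digests; each failure is surfaced per command as a transient transport error rather than as a data error.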
00:51:28.421 [2024-07-22 15:04:47.869023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.869053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.869061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.872925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.421 [2024-07-22 15:04:47.872955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.421 [2024-07-22 15:04:47.872963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.421 [2024-07-22 15:04:47.877364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.877399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.877408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.880943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.880978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.880986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.885247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.885280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.885288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.888450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.888480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.888488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.892972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.893003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.893011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.896836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.896866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.896874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.900947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.900978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.900986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.905109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.905140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.905149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.908417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.908445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.908453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.911908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.911937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.911945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.915186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.915216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.915224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.918802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.918830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.918837] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.922621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.922649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.922657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.927234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.927263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.927270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.931087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.931115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.931123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.934937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.934966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.934974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.938678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.938706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.938713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.942652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.942688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.942695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.946991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.947020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.947028] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.422 [2024-07-22 15:04:47.951392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.422 [2024-07-22 15:04:47.951423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.422 [2024-07-22 15:04:47.951431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.423 [2024-07-22 15:04:47.954356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.423 [2024-07-22 15:04:47.954385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.423 [2024-07-22 15:04:47.954393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.423 [2024-07-22 15:04:47.959682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.423 [2024-07-22 15:04:47.959708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.423 [2024-07-22 15:04:47.959716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:28.423 [2024-07-22 15:04:47.963099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.423 [2024-07-22 15:04:47.963128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.423 [2024-07-22 15:04:47.963135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:28.423 [2024-07-22 15:04:47.967047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.423 [2024-07-22 15:04:47.967076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.423 [2024-07-22 15:04:47.967084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:28.423 [2024-07-22 15:04:47.971512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.423 [2024-07-22 15:04:47.971541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:28.423 [2024-07-22 15:04:47.971548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:28.423 [2024-07-22 15:04:47.974860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080) 00:51:28.423 [2024-07-22 15:04:47.974890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:51:28.423 [2024-07-22 15:04:47.974898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated data digest error entries, 15:04:47.979625 through 15:04:48.222069: nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21c6080), each followed by a nvme_qpair.c READ command notice and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 (varying cid/lba) ...]
00:51:28.684
00:51:28.684 Latency(us)
00:51:28.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:51:28.685 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:51:28.685 nvme0n1 : 2.00 7581.62 947.70 0.00 0.00 2107.39 522.28 9615.76
00:51:28.685 ===================================================================================================================
00:51:28.685 Total : 7581.62 947.70 0.00 0.00 2107.39 522.28 9615.76 00:51:28.685 0 00:51:28.685 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:51:28.685 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:51:28.685 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:51:28.685 | .driver_specific 00:51:28.685 | .nvme_error 00:51:28.685 | .status_code 00:51:28.685 | .command_transient_transport_error' 00:51:28.685 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 489 > 0 )) 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 111475 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 111475 ']' 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 111475 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111475 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:28.943 killing process with pid 111475 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111475' 00:51:28.943 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 111475 00:51:28.943 Received shutdown signal, test time was about 2.000000 seconds 00:51:28.944 00:51:28.944 Latency(us) 00:51:28.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:28.944 =================================================================================================================== 00:51:28.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:28.944 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 111475 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=111560 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 111560 /var/tmp/bperf.sock 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:51:29.203 15:04:48 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 111560 ']' 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:29.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:29.203 15:04:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:29.462 [2024-07-22 15:04:48.836103] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:29.462 [2024-07-22 15:04:48.836167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111560 ] 00:51:29.462 [2024-07-22 15:04:48.975704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:29.462 [2024-07-22 15:04:49.053892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:30.396 15:04:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:30.654 nvme0n1 00:51:30.654 15:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:51:30.654 15:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:30.654 15:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:30.654 15:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:30.654 15:04:50 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 15:04:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:30.654 Running I/O for 2 seconds...
[... repeated data digest error entries, 15:04:50.261183 through 15:04:50.854140: tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with varying pdu values, each followed by a nvme_qpair.c WRITE command notice and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 (varying cid/lba) ...]
00:51:31.434 [2024-07-22 15:04:50.861103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eb328 00:51:31.434 [2024-07-22 15:04:50.862148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.862176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.870290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fb048 00:51:31.434 [2024-07-22 15:04:50.871371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.871413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.878333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f0ff8 00:51:31.434 [2024-07-22 15:04:50.879379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.879427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.886889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f9b30 00:51:31.434 [2024-07-22 15:04:50.887630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.887677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.895282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f46d0 00:51:31.434 [2024-07-22 15:04:50.896144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.896179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.903954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7da8 00:51:31.434 [2024-07-22 15:04:50.904593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.904631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.912300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7970 00:51:31.434 [2024-07-22 15:04:50.912822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.912850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.920965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f6890 00:51:31.434 [2024-07-22 15:04:50.921692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.921718] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.929468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ed920 00:51:31.434 [2024-07-22 15:04:50.930001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.930028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.938070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f8a50 00:51:31.434 [2024-07-22 15:04:50.938802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.938828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.946590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e6738 00:51:31.434 [2024-07-22 15:04:50.947067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.947109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.956272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e6300 00:51:31.434 [2024-07-22 15:04:50.957379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.957424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.964816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f1ca0 00:51:31.434 [2024-07-22 15:04:50.965529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.965572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.973091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f96f8 00:51:31.434 [2024-07-22 15:04:50.973723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.973761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.980874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e49b0 00:51:31.434 [2024-07-22 15:04:50.981584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.981611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.989772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7100 00:51:31.434 [2024-07-22 15:04:50.990482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:50.990508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:50.999590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f8a50 00:51:31.434 [2024-07-22 15:04:51.000295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:51.000323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:51.007631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e8d30 00:51:31.434 [2024-07-22 15:04:51.009096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:51.009123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:51.015130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f1ca0 00:51:31.434 [2024-07-22 15:04:51.015729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:51.015754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:51.025453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fbcf0 00:51:31.434 [2024-07-22 15:04:51.026412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:51.026438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:51.035232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fc998 00:51:31.434 [2024-07-22 15:04:51.036659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 15:04:51.036691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:51.041456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f8a50 00:51:31.434 [2024-07-22 15:04:51.042062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.434 [2024-07-22 
15:04:51.042088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:31.434 [2024-07-22 15:04:51.051400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e6fa8 00:51:31.434 [2024-07-22 15:04:51.052184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.435 [2024-07-22 15:04:51.052218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:31.435 [2024-07-22 15:04:51.059251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ddc00 00:51:31.435 [2024-07-22 15:04:51.060095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.435 [2024-07-22 15:04:51.060123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.068057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ee190 00:51:31.693 [2024-07-22 15:04:51.068931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.068981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.077080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f9b30 00:51:31.693 [2024-07-22 15:04:51.077705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.077733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.087337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f31b8 00:51:31.693 [2024-07-22 15:04:51.088786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.088814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.093594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e99d8 00:51:31.693 [2024-07-22 15:04:51.094331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.094361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.103797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fbcf0 00:51:31.693 [2024-07-22 15:04:51.105000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:51:31.693 [2024-07-22 15:04:51.105047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.111630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f1868 00:51:31.693 [2024-07-22 15:04:51.112494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.112528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.119676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e2c28 00:51:31.693 [2024-07-22 15:04:51.120312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.120340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.129215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e1b48 00:51:31.693 [2024-07-22 15:04:51.130181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.130208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.137344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f4b08 00:51:31.693 [2024-07-22 15:04:51.138204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.138230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.145717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f96f8 00:51:31.693 [2024-07-22 15:04:51.146221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.146248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.154420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e5ec8 00:51:31.693 [2024-07-22 15:04:51.155157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.155183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.162683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fc998 00:51:31.693 [2024-07-22 15:04:51.163294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3241 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.163319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.172978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e5ec8 00:51:31.693 [2024-07-22 15:04:51.174057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.174082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.181130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f5be8 00:51:31.693 [2024-07-22 15:04:51.182263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.182288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.190532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e5658 00:51:31.693 [2024-07-22 15:04:51.191869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.191896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.200039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ea248 00:51:31.693 [2024-07-22 15:04:51.201415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.201441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.209489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fd640 00:51:31.693 [2024-07-22 15:04:51.211032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.693 [2024-07-22 15:04:51.211056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:31.693 [2024-07-22 15:04:51.215953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e95a0 00:51:31.693 [2024-07-22 15:04:51.216563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.216589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.226829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e1710 00:51:31.694 [2024-07-22 15:04:51.227960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:19734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.227986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.235065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e8088 00:51:31.694 [2024-07-22 15:04:51.236095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.236120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.242480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fb8b8 00:51:31.694 [2024-07-22 15:04:51.243099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.243125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.251596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e4578 00:51:31.694 [2024-07-22 15:04:51.252215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.252242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.261912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f4b08 00:51:31.694 [2024-07-22 15:04:51.263182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.263210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.270853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e5a90 00:51:31.694 [2024-07-22 15:04:51.271707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.271733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.280232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ee190 00:51:31.694 [2024-07-22 15:04:51.281364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.281391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.289235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f1868 00:51:31.694 [2024-07-22 15:04:51.290064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.290106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.297332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f46d0 00:51:31.694 [2024-07-22 15:04:51.298543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.298570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.306655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ef6a8 00:51:31.694 [2024-07-22 15:04:51.307612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.307650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:51:31.694 [2024-07-22 15:04:51.316419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190dfdc0 00:51:31.694 [2024-07-22 15:04:51.316999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.694 [2024-07-22 15:04:51.317041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.328201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e84c0 00:51:31.953 [2024-07-22 15:04:51.329934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.329981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.336293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ebfd0 00:51:31.953 [2024-07-22 15:04:51.337179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.337210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.345275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e6300 00:51:31.953 [2024-07-22 15:04:51.346033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.346061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.354678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e3060 00:51:31.953 [2024-07-22 15:04:51.355669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.355700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.363256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f4f40 00:51:31.953 [2024-07-22 15:04:51.364007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.364034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.374350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e0a68 00:51:31.953 [2024-07-22 15:04:51.375799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.375824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.380532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190df118 00:51:31.953 [2024-07-22 15:04:51.381165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.381192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.388874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fc998 00:51:31.953 [2024-07-22 15:04:51.389483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.389510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.398373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e84c0 00:51:31.953 [2024-07-22 15:04:51.398895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.398924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.407155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fa7d8 00:51:31.953 [2024-07-22 15:04:51.407899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.407947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.417748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f1430 00:51:31.953 [2024-07-22 
15:04:51.419070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.419107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.424735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7970 00:51:31.953 [2024-07-22 15:04:51.425564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.425594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.434369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eee38 00:51:31.953 [2024-07-22 15:04:51.435384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.435412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.443094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e99d8 00:51:31.953 [2024-07-22 15:04:51.443809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.443837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.451796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fe720 00:51:31.953 [2024-07-22 15:04:51.452758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.452785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.460389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fe720 00:51:31.953 [2024-07-22 15:04:51.461450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.461478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.469180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190feb58 00:51:31.953 [2024-07-22 15:04:51.469966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.469993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.478001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fdeb0 
00:51:31.953 [2024-07-22 15:04:51.478949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.478973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.487545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fef90 00:51:31.953 [2024-07-22 15:04:51.488853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.488879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.493986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f6cc8 00:51:31.953 [2024-07-22 15:04:51.494670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.494702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.503512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ee5c8 00:51:31.953 [2024-07-22 15:04:51.504341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.504367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.512248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f0bc0 00:51:31.953 [2024-07-22 15:04:51.513187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.513214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.521108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190efae0 00:51:31.953 [2024-07-22 15:04:51.522284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.953 [2024-07-22 15:04:51.522315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:31.953 [2024-07-22 15:04:51.529683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190df550 00:51:31.954 [2024-07-22 15:04:51.530496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.954 [2024-07-22 15:04:51.530523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:51:31.954 [2024-07-22 15:04:51.538420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) 
with pdu=0x2000190edd58 00:51:31.954 [2024-07-22 15:04:51.539015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.954 [2024-07-22 15:04:51.539043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:51:31.954 [2024-07-22 15:04:51.547792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fc560 00:51:31.954 [2024-07-22 15:04:51.548556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.954 [2024-07-22 15:04:51.548583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:31.954 [2024-07-22 15:04:51.557001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7da8 00:51:31.954 [2024-07-22 15:04:51.557970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.954 [2024-07-22 15:04:51.557997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:31.954 [2024-07-22 15:04:51.566326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f4f40 00:51:31.954 [2024-07-22 15:04:51.567311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.954 [2024-07-22 15:04:51.567343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:31.954 [2024-07-22 15:04:51.576470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f35f0 00:51:31.954 [2024-07-22 15:04:51.577944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:31.954 [2024-07-22 15:04:51.578000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.583065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eaef0 00:51:32.212 [2024-07-22 15:04:51.583792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.583830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.594501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190df550 00:51:32.212 [2024-07-22 15:04:51.595911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.595940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.600764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19f99f0) with pdu=0x2000190e12d8 00:51:32.212 [2024-07-22 15:04:51.601394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.601422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.610856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7970 00:51:32.212 [2024-07-22 15:04:51.611918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.611946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.619720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fdeb0 00:51:32.212 [2024-07-22 15:04:51.620781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.620809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.627557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eee38 00:51:32.212 [2024-07-22 15:04:51.628458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.628486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.636728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e5220 00:51:32.212 [2024-07-22 15:04:51.637755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.637782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.644749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7538 00:51:32.212 [2024-07-22 15:04:51.645836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.645869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.654058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e9168 00:51:32.212 [2024-07-22 15:04:51.655099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.655134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.660907] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f1430 00:51:32.212 [2024-07-22 15:04:51.661460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.661488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.672362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fc560 00:51:32.212 [2024-07-22 15:04:51.673653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.673701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.678817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eb760 00:51:32.212 [2024-07-22 15:04:51.679392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.679423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.689288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e2c28 00:51:32.212 [2024-07-22 15:04:51.690265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.690296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:32.212 [2024-07-22 15:04:51.698558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f0bc0 00:51:32.212 [2024-07-22 15:04:51.699747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.212 [2024-07-22 15:04:51.699779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.707231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e5ec8 00:51:32.213 [2024-07-22 15:04:51.708532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.708564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.716449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fbcf0 00:51:32.213 [2024-07-22 15:04:51.717462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.717495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.727417] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f6890 00:51:32.213 [2024-07-22 15:04:51.728950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.728978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.733849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eea00 00:51:32.213 [2024-07-22 15:04:51.734457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.734484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.744204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ee5c8 00:51:32.213 [2024-07-22 15:04:51.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.745336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.753754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eee38 00:51:32.213 [2024-07-22 15:04:51.755281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.755307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.760087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fb8b8 00:51:32.213 [2024-07-22 15:04:51.760717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.760741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.769859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ed0b0 00:51:32.213 [2024-07-22 15:04:51.770653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.770689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.778710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e3498 00:51:32.213 [2024-07-22 15:04:51.779715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.779742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 
15:04:51.787063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f4298 00:51:32.213 [2024-07-22 15:04:51.788040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.788067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.796118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e7818 00:51:32.213 [2024-07-22 15:04:51.797261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.797288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.805083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e8088 00:51:32.213 [2024-07-22 15:04:51.806177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.806204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.813098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e6738 00:51:32.213 [2024-07-22 15:04:51.814568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.814596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.820851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ed4e8 00:51:32.213 [2024-07-22 15:04:51.821461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.821487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.829814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e4578 00:51:32.213 [2024-07-22 15:04:51.830432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.830466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:51:32.213 [2024-07-22 15:04:51.840110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7100 00:51:32.213 [2024-07-22 15:04:51.841124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.213 [2024-07-22 15:04:51.841163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
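Every failure in the stream above follows the same three-entry pattern: the TCP transport flags the data digest mismatch on the qpair (tcp.c:2058, data_crc32_calc_done), then the NVMe queue-pair code prints the affected WRITE (nvme_qpair.c:243) and its completion coming back as COMMAND TRANSIENT TRANSPORT ERROR (00/22) (nvme_qpair.c:474). A minimal sketch for tallying those entries from a saved copy of this console output is below; the log file name is an assumption and nothing in the sketch is produced by the test itself.

  #!/usr/bin/env bash
  # Hypothetical helper, not part of the test suite: count digest failures and
  # the matching transient transport completions in a saved copy of this log.
  LOG=${1:-nvmf_digest_error.console.log}   # assumed path

  # grep -o counts every occurrence, even when wrapping puts several on one line
  digest_errors=$(grep -o 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$LOG" | wc -l)
  transient_completions=$(grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$LOG" | wc -l)

  echo "data digest errors reported by tcp.c:   $digest_errors"
  echo "transient transport completions logged: $transient_completions"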
00:51:32.472 [2024-07-22 15:04:51.847513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f1430 00:51:32.472 [2024-07-22 15:04:51.848159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.848190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.857165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fdeb0 00:51:32.472 [2024-07-22 15:04:51.858137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.858166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.866303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f4298 00:51:32.472 [2024-07-22 15:04:51.867463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.867494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.875492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f0bc0 00:51:32.472 [2024-07-22 15:04:51.876459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.876493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.884348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190feb58 00:51:32.472 [2024-07-22 15:04:51.885395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.885425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.892566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f6458 00:51:32.472 [2024-07-22 15:04:51.893519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.893557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.902512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f9f68 00:51:32.472 [2024-07-22 15:04:51.903752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.903780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0060 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.911254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e88f8 00:51:32.472 [2024-07-22 15:04:51.912480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.912507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.918772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f7538 00:51:32.472 [2024-07-22 15:04:51.919776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.919806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.927740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ef270 00:51:32.472 [2024-07-22 15:04:51.928980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.929010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.936405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e01f8 00:51:32.472 [2024-07-22 15:04:51.937470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.937501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.945234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e4140 00:51:32.472 [2024-07-22 15:04:51.945953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.945983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.955132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e3060 00:51:32.472 [2024-07-22 15:04:51.956485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.956529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.961404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f0ff8 00:51:32.472 [2024-07-22 15:04:51.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.962101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.970814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e7c50 00:51:32.472 [2024-07-22 15:04:51.971580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.971611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.979733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ee5c8 00:51:32.472 [2024-07-22 15:04:51.980757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.980791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.988329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f3a28 00:51:32.472 [2024-07-22 15:04:51.989026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.989056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:51.996133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ddc00 00:51:32.472 [2024-07-22 15:04:51.997081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:51.997111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:52.004711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190ed920 00:51:32.472 [2024-07-22 15:04:52.005383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:52.005410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:52.014297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eb328 00:51:32.472 [2024-07-22 15:04:52.015323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:52.015350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:52.021501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190de038 00:51:32.472 [2024-07-22 15:04:52.022200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:52.022227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:52.030457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e5ec8 00:51:32.472 [2024-07-22 15:04:52.031243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.472 [2024-07-22 15:04:52.031270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:51:32.472 [2024-07-22 15:04:52.039911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e8088 00:51:32.472 [2024-07-22 15:04:52.040809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.473 [2024-07-22 15:04:52.040838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:51:32.473 [2024-07-22 15:04:52.048397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f9b30 00:51:32.473 [2024-07-22 15:04:52.049073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.473 [2024-07-22 15:04:52.049110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:51:32.473 [2024-07-22 15:04:52.056999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fe2e8 00:51:32.473 [2024-07-22 15:04:52.057886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.473 [2024-07-22 15:04:52.057915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:32.473 [2024-07-22 15:04:52.064977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fe2e8 00:51:32.473 [2024-07-22 15:04:52.065736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:23520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.473 [2024-07-22 15:04:52.065763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:51:32.473 [2024-07-22 15:04:52.073990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f9f68 00:51:32.473 [2024-07-22 15:04:52.075013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.473 [2024-07-22 15:04:52.075041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:51:32.473 [2024-07-22 15:04:52.082542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fb048 00:51:32.473 [2024-07-22 15:04:52.083431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.473 [2024-07-22 15:04:52.083458] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:51:32.473 [2024-07-22 15:04:52.091213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e01f8 00:51:32.473 [2024-07-22 15:04:52.091760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.473 [2024-07-22 15:04:52.091791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:51:32.473 [2024-07-22 15:04:52.101188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fb8b8 00:51:32.731 [2024-07-22 15:04:52.102310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.102358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.108508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fe720 00:51:32.731 [2024-07-22 15:04:52.109230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.109265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.117636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e49b0 00:51:32.731 [2024-07-22 15:04:52.118563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.118590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.126457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f92c0 00:51:32.731 [2024-07-22 15:04:52.127799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.127831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.135513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f0350 00:51:32.731 [2024-07-22 15:04:52.136447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.136494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.146163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e99d8 00:51:32.731 [2024-07-22 15:04:52.147557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.147587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.152285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190efae0 00:51:32.731 [2024-07-22 15:04:52.152911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.152940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.160431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e73e0 00:51:32.731 [2024-07-22 15:04:52.161071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.161098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.170014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f6458 00:51:32.731 [2024-07-22 15:04:52.170450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.170473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.179819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190e88f8 00:51:32.731 [2024-07-22 15:04:52.180878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.180905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.187950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f0350 00:51:32.731 [2024-07-22 15:04:52.188901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.188929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.196419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f9f68 00:51:32.731 [2024-07-22 15:04:52.197184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.197213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.204938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f9b30 00:51:32.731 [2024-07-22 15:04:52.205859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 
15:04:52.205885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.213184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f4298 00:51:32.731 [2024-07-22 15:04:52.214126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.214154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.222676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190fb8b8 00:51:32.731 [2024-07-22 15:04:52.223786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.223812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.230284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190eb760 00:51:32.731 [2024-07-22 15:04:52.231717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.231744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:32.731 [2024-07-22 15:04:52.239063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f99f0) with pdu=0x2000190f81e0 00:51:32.731 [2024-07-22 15:04:52.240115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:51:32.731 [2024-07-22 15:04:52.240143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:51:32.731 00:51:32.731 Latency(us) 00:51:32.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:32.731 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:51:32.731 nvme0n1 : 2.00 28802.93 112.51 0.00 0.00 4438.76 1795.80 15453.90 00:51:32.731 =================================================================================================================== 00:51:32.731 Total : 28802.93 112.51 0.00 0.00 4438.76 1795.80 15453.90 00:51:32.731 0 00:51:32.731 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:51:32.731 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:51:32.731 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:51:32.731 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:51:32.731 | .driver_specific 00:51:32.731 | .nvme_error 00:51:32.731 | .status_code 00:51:32.731 | .command_transient_transport_error' 00:51:32.989 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 226 > 0 )) 00:51:32.989 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@73 -- # killprocess 111560 00:51:32.989 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 111560 ']' 00:51:32.989 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 111560 00:51:32.989 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:51:32.989 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:32.989 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111560 00:51:32.989 killing process with pid 111560 00:51:32.990 Received shutdown signal, test time was about 2.000000 seconds 00:51:32.990 00:51:32.990 Latency(us) 00:51:32.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:32.990 =================================================================================================================== 00:51:32.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:32.990 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:32.990 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:32.990 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111560' 00:51:32.990 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 111560 00:51:32.990 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 111560 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=111645 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 111645 /var/tmp/bperf.sock 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 111645 ']' 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:51:33.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:33.248 15:04:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:33.506 [2024-07-22 15:04:52.882323] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:33.506 [2024-07-22 15:04:52.882461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:51:33.506 Zero copy mechanism will not be used. 00:51:33.506 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111645 ] 00:51:33.506 [2024-07-22 15:04:53.019927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:33.506 [2024-07-22 15:04:53.099045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:34.439 15:04:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:51:34.699 nvme0n1 00:51:34.699 15:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:51:34.699 15:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:34.699 15:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:34.699 15:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:34.699 15:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:51:34.699 15:04:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:51:34.699 I/O size of 131072 is greater than zero copy threshold (65536). 00:51:34.699 Zero copy mechanism will not be used. 00:51:34.699 Running I/O for 2 seconds... 
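The trace above sets up the second injection job (randwrite, 128 KiB I/O, queue depth 16): bdevperf is started in wait-for-RPC mode (-z) on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with --ddgst so data digests are carried on the TCP connection, and accel_error_inject_error switches crc32c handling from disable to corrupt before perform_tests kicks off the 2-second run. The sketch below condenses that sequence under assumptions the trace does not show: accel_error_inject_error goes through rpc_cmd rather than bperf_rpc, so it is taken here to reach the target application's default RPC socket, and a simple polling loop stands in for waitforlisten.

  #!/usr/bin/env bash
  # Condensed sketch of the traced sequence, not the test script itself.
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock
  TGT_SOCK=/var/tmp/spdk.sock            # assumed default socket behind rpc_cmd
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }
  tgt_rpc()   { "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" "$@"; }

  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 131072 -t 2 -q 16 -z &

  # waitforlisten stand-in: poll until the bdevperf RPC socket answers
  until bperf_rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt_rpc accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt crc32c results so every data digest check on the connection fails
  tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

  # same query get_transient_errcount issued after the previous job
  bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r \
      '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'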
00:51:34.699 [2024-07-22 15:04:54.299250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.699 [2024-07-22 15:04:54.299917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.699 [2024-07-22 15:04:54.300029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.699 [2024-07-22 15:04:54.304665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.699 [2024-07-22 15:04:54.305290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.699 [2024-07-22 15:04:54.305381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.699 [2024-07-22 15:04:54.310005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.699 [2024-07-22 15:04:54.310593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.699 [2024-07-22 15:04:54.310624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.699 [2024-07-22 15:04:54.315169] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.699 [2024-07-22 15:04:54.315710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.699 [2024-07-22 15:04:54.315740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.699 [2024-07-22 15:04:54.320254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.699 [2024-07-22 15:04:54.320780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.699 [2024-07-22 15:04:54.320819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.699 [2024-07-22 15:04:54.325453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.699 [2024-07-22 15:04:54.326071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.699 [2024-07-22 15:04:54.326137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.959 [2024-07-22 15:04:54.330799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.959 [2024-07-22 15:04:54.331340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.959 [2024-07-22 15:04:54.331374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.959 [2024-07-22 15:04:54.335957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.959 [2024-07-22 15:04:54.336477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.959 [2024-07-22 15:04:54.336512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.959 [2024-07-22 15:04:54.341102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.959 [2024-07-22 15:04:54.341602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.341634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.346226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.346758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.346787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.351340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.351884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.351907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.356609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.357153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.357182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.361736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.362249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.362279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.366777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.367266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.367299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.371815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.372293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.372318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.376902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.377419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.377450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.382131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.382632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.382663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.387300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.387826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.387853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.392433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.393054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.393080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.398503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.399061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.399085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.403984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.404506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.404536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.409350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.409878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.409905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.414671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.415191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.415232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.419970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.420487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.420517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.425323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.425850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.425886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.430851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.431364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.431390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.436022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.436525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.436562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.441226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.441722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 
[2024-07-22 15:04:54.441764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.446583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.447147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.447176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.451748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.452227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.452266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.456846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.457326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.457355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.461814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.462322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.462351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.466898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.960 [2024-07-22 15:04:54.467411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.960 [2024-07-22 15:04:54.467440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.960 [2024-07-22 15:04:54.471898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.472402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.472436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.476909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.477423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.477451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.482004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.482497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.482526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.487105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.487612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.487652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.492279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.492834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.492871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.497536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.498070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.498104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.502853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.503371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.503403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.508102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.508618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.508658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.513186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.513715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.513743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.518376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.518911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.518940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.523601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.524161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.524192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.528775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.529286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.529316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.533888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.534393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.534424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.538905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.539385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.539414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.544021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.544561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.544590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.549171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.549695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.549723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.554297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.554817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.554846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.559742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.560256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.560279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.565224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.565760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.565788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.570542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.571071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.571100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.575861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.576395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.576425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.581137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 [2024-07-22 15:04:54.581650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.581687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:34.961 [2024-07-22 15:04:54.586513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:34.961 
[2024-07-22 15:04:54.587071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:34.961 [2024-07-22 15:04:54.587104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.592124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.592650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.592696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.597722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.598274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.598308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.603155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.603680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.603709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.608835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.609363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.609393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.614208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.614732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.614756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.619545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.620113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.620154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.624997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.625541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.625575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.630459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.630982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.631018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.635771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.636282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.636305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.641245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.641778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.641809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.646710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.647307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.647335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.652183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.652722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.652745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.657414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.657953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.657995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.662752] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.663276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.663310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.667886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.668393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.668416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.672920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.673412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.223 [2024-07-22 15:04:54.673441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.223 [2024-07-22 15:04:54.679590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.223 [2024-07-22 15:04:54.680713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.680843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.690145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.690295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.690340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.696402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.696646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.696725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.701927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.702201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.702230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:51:35.224 [2024-07-22 15:04:54.706359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.706658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.706704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.710607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.710990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.711022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.713890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.714272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.714311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.717171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.717517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.717544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.720398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.720800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.720823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.723335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.723587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.723604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.726228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.726470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.726488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.729056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.729219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.729237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.731938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.732049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.732066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.734782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.734923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.734941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.737575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.737733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.737751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.740422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.740537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.740555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.743343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.743631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.743650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.746157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.746378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.746396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.749083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.749290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.749308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.751881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.752114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.752132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.224 [2024-07-22 15:04:54.754771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.224 [2024-07-22 15:04:54.755025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.224 [2024-07-22 15:04:54.755042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.757646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.757910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.757940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.760500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.760767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.760786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.763440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.763680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.763698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.766264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.766562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.766597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.769158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.769310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.769328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.772093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.772267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.772284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.775009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.775147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.775235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.777892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.778034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.778191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.780940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.781253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.781334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.783988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.784232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.784250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.786846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.787079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 
15:04:54.787103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.789683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.789875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.789899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.792525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.792745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.792764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.795427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.795520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.795538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.798278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.798414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.798433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.801131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.801239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.801259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.804006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.804234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.804252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.806903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.807123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.807141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.809723] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.809921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.809946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.812494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.812649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.812668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.815379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.815553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.815571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.818265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.818430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.818455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.821120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.821313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.821332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.225 [2024-07-22 15:04:54.823948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.225 [2024-07-22 15:04:54.824115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.225 [2024-07-22 15:04:54.824132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.826741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.826901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.826919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.829576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.829713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.829732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.832468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.832667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.832696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.835484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.835688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.835720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.838311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.838459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.838484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.841196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.841333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.841351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.844068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.844130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.844148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.847010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.847139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.847159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.226 [2024-07-22 15:04:54.849788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.226 [2024-07-22 15:04:54.849996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.226 [2024-07-22 15:04:54.850022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.852784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.853013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.853038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.855654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.855850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.855867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.858464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.858664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.858702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.861289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.861577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.861602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.864299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.864482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.864500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.867200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.867405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.867423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.870129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.870212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.870230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.872975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.873052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.873070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.875831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.875904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.875922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.878730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.878862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.878904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.881536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.881768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.881793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.884399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.884563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.884580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.887252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 
15:04:54.887448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.887467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.890201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.890335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.890377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.488 [2024-07-22 15:04:54.893099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.488 [2024-07-22 15:04:54.893247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.488 [2024-07-22 15:04:54.893265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.895921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.896059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.896077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.898763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.898961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.898985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.901614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.901814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.901838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.904453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.904563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.904581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.907404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with 
pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.907518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.907536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.910274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.910449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.910475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.913129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.913254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.913273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.915968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.916054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.916072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.918838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.918962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.918981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.921628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.921869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.921887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.924459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.924758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.924777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.927295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.927546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.927563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.930142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.930335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.930359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.932995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.933149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.933168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.935822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.935944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.935962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.938721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.938843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.938862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.941573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.941809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.941828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.944424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.944494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.944512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.947348] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.947554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.947572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.950304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.950481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.950506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.953129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.953290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.953308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.955957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.956078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.956096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.958863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.489 [2024-07-22 15:04:54.959057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.489 [2024-07-22 15:04:54.959075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.489 [2024-07-22 15:04:54.961767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.961953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.961978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.964592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.964761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.964779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.490 
[2024-07-22 15:04:54.967420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.967595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.967612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.970349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.970481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.970506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.973208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.973351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.973369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.976070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.976307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.976325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.978944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.979085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.979103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.981776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.981932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.981949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.984585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.984781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.984805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.987426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.987615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.987632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.990215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.990382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.990399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.993063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.993184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.993202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.995847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.996007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.996024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:54.998690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:54.998898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:54.998916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.001480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.001655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.001689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.004345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.004464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.004482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.007189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.007332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.007349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.010026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.010160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.010177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.012898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.013022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.013040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.015721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.015832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.015850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.018602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.018736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.018755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.021501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.021630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.490 [2024-07-22 15:04:55.021648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.490 [2024-07-22 15:04:55.024235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.490 [2024-07-22 15:04:55.024424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.024442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.027068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.027167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.027184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.030130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.030294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.030312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.032966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.033123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.033152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.035761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.035948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.035972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.038546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.038722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.038749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.041522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.041656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.041698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.044546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.044712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.044740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.047854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.047974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.048011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.050761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.050886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.050922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.053603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.053838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.053866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.056420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.056582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.056610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.059296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.059466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.059484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.062134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.062308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.062336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.065001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.065139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 
15:04:55.065157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.067765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.067971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.067989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.070815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.070975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.071004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.073677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.073879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.073909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.076443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.076684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.076702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.079351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.079549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.079566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.082148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.082377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.082406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.084982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.085115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:51:35.491 [2024-07-22 15:04:55.085132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.491 [2024-07-22 15:04:55.087799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.491 [2024-07-22 15:04:55.087921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.491 [2024-07-22 15:04:55.087939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.090632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.090806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.090835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.093547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.093685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.093702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.096371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.096534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.096551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.099212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.099340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.099358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.102029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.102186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.102210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.104906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.105000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.105017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.107717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.107881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.107899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.110629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.110725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.110745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.492 [2024-07-22 15:04:55.113578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.492 [2024-07-22 15:04:55.113731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.492 [2024-07-22 15:04:55.113749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.116382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.116501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.116518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.119329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.119468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.119485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.122254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.122440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.122475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.125155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.125293] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.125311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.127976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.128107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.128125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.130849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.131011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.131036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.133771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.133845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.133864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.136558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.136707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.136727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.139468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.139559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.139578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.142408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.142497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.142516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.145260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.145333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.145352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.148212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.148327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.148344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.754 [2024-07-22 15:04:55.151154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.754 [2024-07-22 15:04:55.151271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.754 [2024-07-22 15:04:55.151289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.154117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.154197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.154216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.157022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.157095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.157114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.159929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.160086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.160104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.162826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.162995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.163013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.165752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 
15:04:55.165929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.165947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.168604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.168798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.168825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.171513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.171730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.171749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.174474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.174648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.174665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.177396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.177576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.177594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.180221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.180423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.180440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.183146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.183280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.183298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.185975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with 
pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.186119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.186144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.188746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.188968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.188987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.191642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.191846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.191864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.194549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.194664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.194682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.197494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.197573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.197591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.200299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.200479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.200497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.203244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.203395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.203412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.206137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.206274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.206299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.209068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.209170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.209189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.755 [2024-07-22 15:04:55.211957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.755 [2024-07-22 15:04:55.212083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.755 [2024-07-22 15:04:55.212102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.214934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.215063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.215081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.217835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.217980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.217998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.220593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.220699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.220717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.223392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.223483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.223501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.226234] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.226385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.226408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.229103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.229320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.229345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.231924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.232067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.232085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.234740] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.234923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.234951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.237566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.237751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.237769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.240391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.240525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.240543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.243252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.243463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.243481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:51:35.756 [2024-07-22 15:04:55.246101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.246224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.246242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.249030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.249102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.249121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.251906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.252011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.252029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.254668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.254838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.254855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.257543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.257626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.257644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.260349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.260504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.260523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.263297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.263502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.263519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.266166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.266292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.266310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.269054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.269205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.269222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.756 [2024-07-22 15:04:55.271833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.756 [2024-07-22 15:04:55.272034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.756 [2024-07-22 15:04:55.272051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.274662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.274830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.274854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.277601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.277737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.277755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.280408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.280561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.280580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.283227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.283418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.283435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.286143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.286349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.286379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.289002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.289160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.289178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.291845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.292055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.292072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.294771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.295059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.295090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.297769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.297903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.297933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.300702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.300801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.300825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.303568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.303664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.303702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.306507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.306601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.306621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.309415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.309523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.309557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.312314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.312572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.312590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.315221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.315400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.315419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.318211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.318441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.318475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.321154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.321220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.321239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.324024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.324235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 
[2024-07-22 15:04:55.324261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.326929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.327090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.327116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.329855] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.330114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.330140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.332793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.332951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.332993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.757 [2024-07-22 15:04:55.335697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.757 [2024-07-22 15:04:55.335862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.757 [2024-07-22 15:04:55.335887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.338635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.338833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.338872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.341622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.341842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.341875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.344494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.344594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.344627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.347327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.347438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.347462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.350130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.350269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.350294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.352975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.353105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.353129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.355838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.355974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.355999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.358768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.358915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.358939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.361675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.361917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.361947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.364466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.364680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.364702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.367354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.367554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.367579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.370207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.370378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.370414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.373028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.373201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.373241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.375931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.376081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.376108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.378961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.379094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.379119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:35.758 [2024-07-22 15:04:55.381934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:35.758 [2024-07-22 15:04:55.382066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:35.758 [2024-07-22 15:04:55.382091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.019 [2024-07-22 15:04:55.384844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.019 [2024-07-22 15:04:55.384946] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.019 [2024-07-22 15:04:55.384971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.019 [2024-07-22 15:04:55.387738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.019 [2024-07-22 15:04:55.387926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.019 [2024-07-22 15:04:55.387950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.019 [2024-07-22 15:04:55.390772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.019 [2024-07-22 15:04:55.390983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.019 [2024-07-22 15:04:55.391009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.019 [2024-07-22 15:04:55.393781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.019 [2024-07-22 15:04:55.393965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.019 [2024-07-22 15:04:55.393998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.019 [2024-07-22 15:04:55.396821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.019 [2024-07-22 15:04:55.396930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.019 [2024-07-22 15:04:55.396955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.019 [2024-07-22 15:04:55.399815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.019 [2024-07-22 15:04:55.399920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.019 [2024-07-22 15:04:55.399944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.019 [2024-07-22 15:04:55.402760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.402913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.402938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.405797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.406044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.406078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.408986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.409112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.409138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.412098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.412206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.412251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.415541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.415711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.415747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.418692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.418806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.418830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.421923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.421993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.422019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.425286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.425470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.425489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.428383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 
15:04:55.428626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.428663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.431721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.431884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.431903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.435071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.435255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.435275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.438255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.438416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.438450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.441385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.441528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.441547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.444471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.444645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.444665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.447665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.447819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.447836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.451043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with 
pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.451171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.454281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.454474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.454492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.457569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.457762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.457783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.460751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.460919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.460939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.463935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.464087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.464104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.467149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.467333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.467352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.470509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.470617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.470637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.473669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.473899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.473918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.476857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.477062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.020 [2024-07-22 15:04:55.477093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.020 [2024-07-22 15:04:55.479987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.020 [2024-07-22 15:04:55.480172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.480190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.483117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.483295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.483313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.486312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.486519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.486539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.489656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.489887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.489917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.492813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.493007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.493026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.495931] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.496115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.496133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.499094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.499296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.499316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.502696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.502903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.502921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.506194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.506383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.506410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.509965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.510189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.510210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.513786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.514010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.514029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.517043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.517242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.517275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:51:36.021 [2024-07-22 15:04:55.520366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.520542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.520560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.524203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.524393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.524412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.527501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.527697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.527717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.531009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.531192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.531211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.534218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.534399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.534431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.537591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.537822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.537851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.540712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.540911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.540940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.543952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.544136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.544166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.547609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.547839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.547858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.551389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.551587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.551606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.555069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.555250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.555269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.558239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.558436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.558454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.021 [2024-07-22 15:04:55.562017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.021 [2024-07-22 15:04:55.562227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.021 [2024-07-22 15:04:55.562247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.565254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.565458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.565477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.568699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.568886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.568904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.572077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.572265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.572283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.575342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.575544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.575562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.578663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.578851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.578869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.581651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.581850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.581879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.584813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.585040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.585068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.587920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.588110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.588128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.591067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.591247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.591265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.594255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.594433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.594451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.597385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.597563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.597581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.600527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.600811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.600829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.603710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.603899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.603917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.606841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.607019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.607038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.609889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.610068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 
[2024-07-22 15:04:55.610086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.613050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.613236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.613254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.616111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.616300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.616318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.619279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.619496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.619514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.622390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.622571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.622601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.625428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.625609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.625627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.628489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.628715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.628735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.631458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.631669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.631701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.634487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.634669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.022 [2024-07-22 15:04:55.634708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.022 [2024-07-22 15:04:55.637645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.022 [2024-07-22 15:04:55.637861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.023 [2024-07-22 15:04:55.637886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.023 [2024-07-22 15:04:55.640564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.023 [2024-07-22 15:04:55.640803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.023 [2024-07-22 15:04:55.640822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.023 [2024-07-22 15:04:55.643502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.023 [2024-07-22 15:04:55.643707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.023 [2024-07-22 15:04:55.643725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.023 [2024-07-22 15:04:55.646416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.023 [2024-07-22 15:04:55.646592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.023 [2024-07-22 15:04:55.646621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.649340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.649522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.649540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.652277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.652509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.652547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.655301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.655483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.655519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.658283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.658453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.658472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.661231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.661439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.661476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.664304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.664502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.664532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.667326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.667518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.667537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.670491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.670550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.670570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.673871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.674032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.674061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.677099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.677220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.677243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.680178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.680253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.680272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.683147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.683250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.683269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.686151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.686252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.686271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.689105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.689225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.689246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.692053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 [2024-07-22 15:04:55.692114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.284 [2024-07-22 15:04:55.692132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.284 [2024-07-22 15:04:55.694956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.284 
[2024-07-22 15:04:55.695116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.695145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.697922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.698095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.698123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.700862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.701019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.701036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.703770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.703950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.703967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.706675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.706830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.706848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.709706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.709815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.709834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.712605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.712787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.712804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.715551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) 
with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.715657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.715677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.718523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.718620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.718638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.721451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.721572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.721590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.724323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.724428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.724447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.727284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.727384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.727403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.730248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.730349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.730368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.733186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.733322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.733340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.736103] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.736259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.736277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.739075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.739226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.739255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.741960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.742103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.742120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.744977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.745038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.745057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.747920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.747991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.748010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.750832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.750989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.751006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.753746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.753895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.753919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 
15:04:55.756584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.756754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.756772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.759521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.759644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.759663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.762476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.762589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.762608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.765435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.285 [2024-07-22 15:04:55.765527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.285 [2024-07-22 15:04:55.765545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.285 [2024-07-22 15:04:55.768376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.768482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.768501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.771326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.771445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.771463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.774283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.774435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.774453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:51:36.286 [2024-07-22 15:04:55.777218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.777332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.777350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.780206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.780329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.780348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.783221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.783376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.783395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.786201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.786352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.786370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.789120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.789236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.789255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.792079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.792247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.792264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.794975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.795128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.795146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.797960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.798096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.798115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.800916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.801052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.801070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.803873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.804041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.804059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.806850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.806946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.806964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.809784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.809880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.809899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.812782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.812856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.812874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.815772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.815935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.815953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.819048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.819207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.819226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.822119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.822221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.822240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.825203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.825352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.825374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.828557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.828642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.828678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.831873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.832016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.832036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.835066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.835197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.835215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.838145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.838235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.286 [2024-07-22 15:04:55.838255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.286 [2024-07-22 15:04:55.841321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.286 [2024-07-22 15:04:55.841445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.841465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.844808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.844953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.844971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.848003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.848106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.848126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.851183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.851317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.851337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.854439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.854544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.854564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.857682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.857779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.857811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.860829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.860983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 
[2024-07-22 15:04:55.861002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.863985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.864145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.864163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.867368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.867500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.867518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.870622] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.870811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.870831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.873926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.874064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.874082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.877078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.877222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.877240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.880176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.880324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.880344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.883293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.883470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.883488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.886674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.886842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.886862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.889955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.890161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.890179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.893139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.893299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.893317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.896292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.896484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.896501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.899435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.899593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.899611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.902592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.902796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.902825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.905694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.905880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.905898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.287 [2024-07-22 15:04:55.909147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.287 [2024-07-22 15:04:55.909313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.287 [2024-07-22 15:04:55.909332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.912253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.912452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.912469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.915359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.915516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.915534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.918563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.918714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.918745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.921751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.921930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.921948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.924969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.925117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.925157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.928309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.928410] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.928429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.931642] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.931776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.931794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.934832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.934936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.934956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.937918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.938054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.938074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.941078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.941387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.941426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.944239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.944444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.944462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.947502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.947690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.947719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.950967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.951147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.951165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.954181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.954384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.954402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.957390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.957601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.957630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.960554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.960821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.960841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.963750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.963970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.550 [2024-07-22 15:04:55.963988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.550 [2024-07-22 15:04:55.966861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.550 [2024-07-22 15:04:55.967058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.967077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.970204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.970433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.970451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.973306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 
15:04:55.973518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.973544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.976454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.976722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.976740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.979673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.979872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.979890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.982783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.983044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.983078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.985807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.986047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.986076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.988890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.989250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.989291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.992207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.992364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.992382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.551 [2024-07-22 15:04:55.995509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with 
pdu=0x2000190fef90 00:51:36.551 [2024-07-22 15:04:55.995592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.551 [2024-07-22 15:04:55.995610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.551 [... the same pattern — tcp.c:2058:data_crc32_calc_done data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90, a WRITE command notice, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for the remaining WRITE commands on qid:1, timestamps 15:04:55.998 through 15:04:56.278, with varying cid and lba values ...] 00:51:36.816 [2024-07-22 15:04:56.281250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.816 [2024-07-22
15:04:56.281378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.816 [2024-07-22 15:04:56.281395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:51:36.816 [2024-07-22 15:04:56.284368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.816 [2024-07-22 15:04:56.284520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.816 [2024-07-22 15:04:56.284538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:51:36.816 [2024-07-22 15:04:56.287577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f9d30) with pdu=0x2000190fef90 00:51:36.816 [2024-07-22 15:04:56.287757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:51:36.816 [2024-07-22 15:04:56.287775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:51:36.816 00:51:36.816 Latency(us) 00:51:36.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:36.816 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:51:36.816 nvme0n1 : 2.00 9275.64 1159.45 0.00 0.00 1721.04 1144.73 10016.42 00:51:36.816 =================================================================================================================== 00:51:36.816 Total : 9275.64 1159.45 0.00 0.00 1721.04 1144.73 10016.42 00:51:36.816 0 00:51:36.816 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:51:36.816 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:51:36.816 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:51:36.816 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:51:36.816 | .driver_specific 00:51:36.816 | .nvme_error 00:51:36.816 | .status_code 00:51:36.816 | .command_transient_transport_error' 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 598 > 0 )) 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 111645 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 111645 ']' 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 111645 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111645 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' 
reactor_1 = sudo ']' 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111645' 00:51:37.076 killing process with pid 111645 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 111645 00:51:37.076 Received shutdown signal, test time was about 2.000000 seconds 00:51:37.076 00:51:37.076 Latency(us) 00:51:37.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:51:37.076 =================================================================================================================== 00:51:37.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:51:37.076 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 111645 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 111335 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 111335 ']' 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 111335 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111335 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111335' 00:51:37.335 killing process with pid 111335 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 111335 00:51:37.335 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 111335 00:51:37.594 00:51:37.594 real 0m17.302s 00:51:37.594 user 0m31.054s 00:51:37.594 sys 0m5.056s 00:51:37.594 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:51:37.594 15:04:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:51:37.594 ************************************ 00:51:37.594 END TEST nvmf_digest_error 00:51:37.594 ************************************ 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:51:37.594 rmmod nvme_tcp 00:51:37.594 rmmod nvme_fabrics 00:51:37.594 rmmod nvme_keyring 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # 
set -e 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 111335 ']' 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 111335 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 111335 ']' 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 111335 00:51:37.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (111335) - No such process 00:51:37.594 Process with pid 111335 is not found 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 111335 is not found' 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:37.594 15:04:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:37.853 15:04:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:51:37.853 00:51:37.853 real 0m35.241s 00:51:37.853 user 1m3.050s 00:51:37.853 sys 0m9.854s 00:51:37.853 15:04:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:51:37.853 15:04:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:51:37.853 ************************************ 00:51:37.853 END TEST nvmf_digest 00:51:37.853 ************************************ 00:51:37.853 15:04:57 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:51:37.853 15:04:57 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:51:37.853 15:04:57 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:51:37.853 15:04:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:51:37.853 15:04:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:51:37.853 15:04:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:51:37.853 ************************************ 00:51:37.853 START TEST nvmf_mdns_discovery 00:51:37.853 ************************************ 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:51:37.853 * Looking for test storage... 
00:51:37.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:37.853 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:51:38.113 
15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:51:38.113 Cannot find device "nvmf_tgt_br" 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:51:38.113 Cannot find device "nvmf_tgt_br2" 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:51:38.113 Cannot find device "nvmf_tgt_br" 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:51:38.113 Cannot find device "nvmf_tgt_br2" 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:38.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:38.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:38.113 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:51:38.114 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:51:38.114 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:51:38.114 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:51:38.373 15:04:57 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:51:38.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:38.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:51:38.373 00:51:38.373 --- 10.0.0.2 ping statistics --- 00:51:38.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:38.373 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:51:38.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:38.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:51:38.373 00:51:38.373 --- 10.0.0.3 ping statistics --- 00:51:38.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:38.373 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:38.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:51:38.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:51:38.373 00:51:38.373 --- 10.0.0.1 ping statistics --- 00:51:38.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:38.373 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=111939 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:51:38.373 
15:04:57 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 111939 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 111939 ']' 00:51:38.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:38.373 15:04:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:38.373 [2024-07-22 15:04:57.862383] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:38.373 [2024-07-22 15:04:57.862457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:38.373 [2024-07-22 15:04:57.999400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:38.632 [2024-07-22 15:04:58.046953] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:38.632 [2024-07-22 15:04:58.046999] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:51:38.632 [2024-07-22 15:04:58.047005] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:38.632 [2024-07-22 15:04:58.047021] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:38.632 [2024-07-22 15:04:58.047025] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
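
The nvmftestinit / nvmf_veth_init trace above is the entire network fixture for this run: a dedicated namespace for the target, veth pairs giving one initiator address (10.0.0.1) and two target addresses (10.0.0.2, 10.0.0.3), a bridge tying the host-side peers together, iptables rules for NVMe/TCP, and ping checks in both directions. A minimal standalone sketch of that topology, using only the iproute2/iptables commands visible in the trace (interface names, addresses and ports are the script's own; set -e is an addition for the sketch):

#!/usr/bin/env bash
# Sketch of the veth/namespace topology built by nvmf_veth_init, as traced above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# Host-side initiator interface and two target interfaces, each with a bridge-side peer.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target listener addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side peers so 10.0.0.1 can reach both target addresses.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in, and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks mirroring the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

The target process is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc, as traced at common.sh@480), and is the nvmfpid=111939 process that the @482 waitforlisten above blocks on.
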
00:51:38.632 [2024-07-22 15:04:58.047043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.200 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 [2024-07-22 15:04:58.868828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 [2024-07-22 15:04:58.880909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 null0 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:51:39.463 null1 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 null2 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 null3 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=111991 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 111991 /tmp/host.sock 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 111991 ']' 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:39.463 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:39.463 15:04:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:39.463 [2024-07-22 15:04:58.997195] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:51:39.463 [2024-07-22 15:04:58.997260] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111991 ] 00:51:39.722 [2024-07-22 15:04:59.134278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:51:39.722 [2024-07-22 15:04:59.184222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:51:40.353 15:04:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:40.353 15:04:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:51:40.353 15:04:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:51:40.353 15:04:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:51:40.353 15:04:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:51:40.611 15:05:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=112020 00:51:40.611 15:05:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:51:40.611 15:05:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:51:40.611 15:05:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:51:40.611 Process 984 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:51:40.611 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:51:40.611 Successfully dropped root privileges. 00:51:40.611 avahi-daemon 0.8 starting up. 00:51:40.611 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:51:40.611 Successfully called chroot(). 00:51:40.611 Successfully dropped remaining capabilities. 00:51:40.611 No service file found in /etc/avahi/services. 00:51:40.611 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:51:40.611 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:51:40.611 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:51:40.611 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:51:40.611 Network interface enumeration completed. 00:51:40.611 Registering new address record for fe80::1469:2fff:fef4:f245 on nvmf_tgt_if2.*. 00:51:40.611 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:51:40.611 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:51:40.611 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:51:41.546 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2327180598. 
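
The mDNS responder that the discovery test talks to is a dedicated avahi-daemon started inside the target namespace, restricted to the two target interfaces and IPv4 only, exactly as the mdns_discovery.sh @56-@59 lines above show. A hedged recreation of that step; the <(...) process substitution is presumably what appears as /dev/fd/63 in the trace, and the "|| true" is a defensive addition for the sketch:

# Restart avahi scoped to the target namespace (mirrors mdns_discovery.sh @56-@59).
avahi-daemon --kill || true   # stop any system-wide instance first; may report "No such process"

# Feed the config over a pipe; only nvmf_tgt_if/nvmf_tgt_if2 are allowed, IPv4 only.
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
    '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
avahipid=$!

sleep 1   # give the daemon time to join the mDNS multicast groups on both interfaces
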
00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:51:41.546 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:51:41.547 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 [2024-07-22 15:05:01.336345] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 [2024-07-22 15:05:01.392694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:41.805 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:42.064 [2024-07-22 15:05:01.452528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
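
Stripped of the xtrace noise, the RPC sequence running through this part of the trace (mdns_discovery.sh @31 onward, with the @121 data listener and @124 nvmf_publish_mdns_prr following just below) is short. A hedged, self-contained restatement in plain shell: the rpc_cmd stand-in and the SPDK_ROOT default are assumptions (in the test framework rpc_cmd is a helper that forwards to SPDK's RPC client); every RPC name and argument is taken verbatim from the trace:

# Assumption: rpc_cmd forwards its arguments to SPDK's JSON-RPC client.
SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}   # repo path used in this run
rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }

# --- target side (default RPC socket) ---
rpc_cmd nvmf_set_config --discovery-filter=address                       # @31
rpc_cmd framework_start_init                                             # @32: leave --wait-for-rpc state
rpc_cmd nvmf_create_transport -t tcp -o -u 8192                          # @33: transport options as traced
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009                                       # @34: discovery service, first address
for b in null0 null1 null2 null3; do
    rpc_cmd bdev_null_create "$b" 1000 512                               # @36-@39: size/block args as traced
done
rpc_cmd bdev_wait_for_examine                                            # @40

# --- host side: a second nvmf_tgt acts as the discovering host (@47-@48) ---
"$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
hostpid=$!
rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme                         # @61
rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test           # @62

# --- target side: two subsystems exposed to the test host NQN ---
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0                                   # @95
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0                             # @99
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420  # @105
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test        # @109

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20                                  # @112
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2                            # @113
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test       # @117
rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009                                                         # @119
# The trace continues just below with the cnode20 data listener on 10.0.0.3:4420 (@121)
# and nvmf_publish_mdns_prr (@124), which registers the discovery listeners with avahi.
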
00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:42.064 [2024-07-22 15:05:01.464520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:42.064 15:05:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:51:42.631 [2024-07-22 15:05:02.234619] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:51:43.567 [2024-07-22 15:05:02.833478] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:51:43.567 [2024-07-22 15:05:02.833507] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:51:43.567 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:43.567 cookie is 0 00:51:43.567 is_local: 1 00:51:43.567 our_own: 0 00:51:43.567 wide_area: 0 00:51:43.567 multicast: 1 00:51:43.567 cached: 1 00:51:43.567 [2024-07-22 15:05:02.933276] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:51:43.567 [2024-07-22 15:05:02.933293] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:51:43.567 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:43.567 cookie is 0 00:51:43.567 is_local: 1 00:51:43.567 our_own: 0 00:51:43.567 wide_area: 0 00:51:43.567 multicast: 1 00:51:43.567 cached: 1 00:51:43.567 [2024-07-22 15:05:02.933304] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:51:43.567 [2024-07-22 15:05:03.033084] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:51:43.567 [2024-07-22 15:05:03.033099] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:51:43.567 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:43.567 cookie is 0 00:51:43.567 is_local: 1 00:51:43.567 our_own: 0 00:51:43.567 wide_area: 0 00:51:43.567 multicast: 1 00:51:43.567 cached: 1 00:51:43.567 [2024-07-22 15:05:03.132890] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:51:43.567 [2024-07-22 15:05:03.132903] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:51:43.567 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:43.567 cookie is 0 00:51:43.567 is_local: 1 00:51:43.567 our_own: 0 00:51:43.567 wide_area: 0 00:51:43.567 multicast: 1 00:51:43.567 cached: 1 00:51:43.567 [2024-07-22 15:05:03.132909] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:51:44.503 [2024-07-22 15:05:03.843393] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:51:44.503 [2024-07-22 15:05:03.843413] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:51:44.503 [2024-07-22 15:05:03.843425] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:44.503 [2024-07-22 15:05:03.929329] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:51:44.503 [2024-07-22 15:05:03.984524] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:51:44.503 [2024-07-22 15:05:03.984551] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:51:44.503 [2024-07-22 15:05:04.042844] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:51:44.503 [2024-07-22 15:05:04.042869] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:51:44.503 [2024-07-22 15:05:04.042879] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:51:44.503 [2024-07-22 15:05:04.128787] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:51:44.761 [2024-07-22 15:05:04.183329] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:51:44.761 [2024-07-22 15:05:04.183358] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:47.299 15:05:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:51:48.237 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:51:48.238 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:48.238 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:48.238 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:48.238 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:48.238 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:48.238 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:51:48.497 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:48.498 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:48.498 [2024-07-22 15:05:07.970255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:51:48.498 [2024-07-22 15:05:07.970497] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:51:48.498 [2024-07-22 15:05:07.970527] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:48.498 [2024-07-22 15:05:07.970550] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:51:48.498 [2024-07-22 15:05:07.970560] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:51:48.498 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:48.498 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:51:48.498 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:48.498 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:48.498 [2024-07-22 15:05:07.982208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:51:48.498 [2024-07-22 15:05:07.982467] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:51:48.498 [2024-07-22 15:05:07.982501] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:51:48.498 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:48.498 15:05:07 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:51:48.498 [2024-07-22 15:05:08.113320] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:51:48.498 [2024-07-22 15:05:08.113464] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:51:48.758 [2024-07-22 15:05:08.170418] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:51:48.758 [2024-07-22 15:05:08.170439] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:51:48.758 [2024-07-22 15:05:08.170442] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:51:48.758 [2024-07-22 15:05:08.170453] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:48.758 [2024-07-22 15:05:08.170500] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:51:48.758 [2024-07-22 15:05:08.170504] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:51:48.758 [2024-07-22 15:05:08.170507] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:51:48.758 [2024-07-22 15:05:08.170515] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:51:48.758 [2024-07-22 15:05:08.216190] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:51:48.758 [2024-07-22 15:05:08.216206] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:51:48.758 [2024-07-22 15:05:08.216228] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:51:48.758 [2024-07-22 15:05:08.216232] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:51:49.699 15:05:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:51:49.699 15:05:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:51:49.699 15:05:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:51:49.699 15:05:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:49.699 15:05:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:49.699 15:05:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:51:49.699 15:05:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:49.699 [2024-07-22 15:05:09.228965] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:51:49.699 [2024-07-22 15:05:09.228996] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:49.699 [2024-07-22 15:05:09.229033] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:51:49.699 [2024-07-22 15:05:09.229042] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:49.699 [2024-07-22 15:05:09.234213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.234239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.234247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.234252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.234258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.234263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.234270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.234275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.234281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:51:49.699 [2024-07-22 15:05:09.240944] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:51:49.699 [2024-07-22 15:05:09.240978] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:51:49.699 [2024-07-22 15:05:09.242558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.242578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.242585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.242591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.242596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.242602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.242608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:51:49.699 [2024-07-22 15:05:09.242613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:51:49.699 [2024-07-22 15:05:09.242617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.699 [2024-07-22 15:05:09.244162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:49.699 15:05:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:51:49.699 [2024-07-22 15:05:09.252517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.699 [2024-07-22 15:05:09.254157] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.699 [2024-07-22 15:05:09.254239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.699 [2024-07-22 15:05:09.254251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.699 [2024-07-22 15:05:09.254258] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.699 [2024-07-22 15:05:09.254267] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.699 [2024-07-22 15:05:09.254276] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.699 [2024-07-22 15:05:09.254281] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.699 [2024-07-22 15:05:09.254288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:51:49.699 [2024-07-22 15:05:09.254299] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.699 [2024-07-22 15:05:09.262505] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.699 [2024-07-22 15:05:09.262574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.699 [2024-07-22 15:05:09.262584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.699 [2024-07-22 15:05:09.262590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.699 [2024-07-22 15:05:09.262598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.699 [2024-07-22 15:05:09.262606] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.699 [2024-07-22 15:05:09.262610] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.699 [2024-07-22 15:05:09.262616] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.700 [2024-07-22 15:05:09.262630] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.700 [2024-07-22 15:05:09.264178] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.700 [2024-07-22 15:05:09.264235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.264244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.700 [2024-07-22 15:05:09.264249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.264258] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.264265] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.264270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.264275] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.700 [2024-07-22 15:05:09.264282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
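The repeated "connect() failed, errno = 111" entries in this stretch of the trace come from the host-side bdev_nvme reset path: the test has just removed the port-4420 listeners with nvmf_subsystem_remove_listener, so every reconnect attempt to 10.0.0.2:4420 and 10.0.0.3:4420 is refused until discovery re-points the controllers at port 4421 further down. On Linux, errno 111 is ECONNREFUSED; a minimal way to confirm that mapping from a shell, assuming python3 is on PATH:

    # Decode the errno reported by posix_sock_create above (Linux mapping assumed).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # Prints: ECONNREFUSED - Connection refused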
00:51:49.700 [2024-07-22 15:05:09.272519] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.700 [2024-07-22 15:05:09.272568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.272578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.700 [2024-07-22 15:05:09.272583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.272591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.272604] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.272609] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.272614] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.700 [2024-07-22 15:05:09.272622] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.700 [2024-07-22 15:05:09.274187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.700 [2024-07-22 15:05:09.274229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.274238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.700 [2024-07-22 15:05:09.274243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.274251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.274258] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.274262] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.274267] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.700 [2024-07-22 15:05:09.274275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.700 [2024-07-22 15:05:09.282532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.700 [2024-07-22 15:05:09.282581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.282590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.700 [2024-07-22 15:05:09.282595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.282603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.282615] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.282620] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.282625] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.700 [2024-07-22 15:05:09.282633] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.700 [2024-07-22 15:05:09.284196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.700 [2024-07-22 15:05:09.284236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.284245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.700 [2024-07-22 15:05:09.284250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.284259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.284266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.284270] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.284275] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.700 [2024-07-22 15:05:09.284283] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.700 [2024-07-22 15:05:09.292547] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.700 [2024-07-22 15:05:09.292628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.292638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.700 [2024-07-22 15:05:09.292651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.292660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.292668] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.292674] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.292679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.700 [2024-07-22 15:05:09.292694] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.700 [2024-07-22 15:05:09.294205] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.700 [2024-07-22 15:05:09.294247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.294257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.700 [2024-07-22 15:05:09.294262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.294270] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.294278] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.294282] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.294287] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.700 [2024-07-22 15:05:09.294295] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.700 [2024-07-22 15:05:09.302565] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.700 [2024-07-22 15:05:09.302625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.302634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.700 [2024-07-22 15:05:09.302639] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.302653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.302661] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.302665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.302670] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.700 [2024-07-22 15:05:09.302680] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.700 [2024-07-22 15:05:09.304214] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.700 [2024-07-22 15:05:09.304255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.304263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.700 [2024-07-22 15:05:09.304268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.304276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.304284] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.304288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.304293] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.700 [2024-07-22 15:05:09.304301] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.700 [2024-07-22 15:05:09.312574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.700 [2024-07-22 15:05:09.312623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.312648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.700 [2024-07-22 15:05:09.312653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.700 [2024-07-22 15:05:09.312661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.700 [2024-07-22 15:05:09.312669] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.700 [2024-07-22 15:05:09.312674] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.700 [2024-07-22 15:05:09.312679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.700 [2024-07-22 15:05:09.312693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.700 [2024-07-22 15:05:09.314221] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.700 [2024-07-22 15:05:09.314262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.700 [2024-07-22 15:05:09.314271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.701 [2024-07-22 15:05:09.314276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.701 [2024-07-22 15:05:09.314284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.701 [2024-07-22 15:05:09.314291] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.701 [2024-07-22 15:05:09.314296] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.701 [2024-07-22 15:05:09.314301] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.701 [2024-07-22 15:05:09.314308] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.701 [2024-07-22 15:05:09.322584] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.701 [2024-07-22 15:05:09.322630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.701 [2024-07-22 15:05:09.322640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.701 [2024-07-22 15:05:09.322645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.701 [2024-07-22 15:05:09.322653] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.701 [2024-07-22 15:05:09.322661] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.701 [2024-07-22 15:05:09.322665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.701 [2024-07-22 15:05:09.322677] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.701 [2024-07-22 15:05:09.322686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.701 [2024-07-22 15:05:09.324229] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.701 [2024-07-22 15:05:09.324270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.701 [2024-07-22 15:05:09.324279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.701 [2024-07-22 15:05:09.324284] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.701 [2024-07-22 15:05:09.324292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.701 [2024-07-22 15:05:09.324299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.701 [2024-07-22 15:05:09.324304] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.701 [2024-07-22 15:05:09.324309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.701 [2024-07-22 15:05:09.324316] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.963 [2024-07-22 15:05:09.332603] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.963 [2024-07-22 15:05:09.332673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.332690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.963 [2024-07-22 15:05:09.332696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.332705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.332712] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.332717] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.332722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.963 [2024-07-22 15:05:09.332730] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.963 [2024-07-22 15:05:09.334238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.963 [2024-07-22 15:05:09.334281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.334290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.963 [2024-07-22 15:05:09.334296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.334304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.334311] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.334315] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.334320] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.963 [2024-07-22 15:05:09.334328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.963 [2024-07-22 15:05:09.342620] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.963 [2024-07-22 15:05:09.342664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.342695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.963 [2024-07-22 15:05:09.342701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.342709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.342716] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.342721] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.342726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.963 [2024-07-22 15:05:09.342734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.963 [2024-07-22 15:05:09.344250] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.963 [2024-07-22 15:05:09.344293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.344301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.963 [2024-07-22 15:05:09.344306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.344314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.344322] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.344327] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.344332] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.963 [2024-07-22 15:05:09.344340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.963 [2024-07-22 15:05:09.352630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.963 [2024-07-22 15:05:09.352696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.352706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.963 [2024-07-22 15:05:09.352711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.352719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.352726] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.352731] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.352736] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.963 [2024-07-22 15:05:09.352744] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.963 [2024-07-22 15:05:09.354266] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.963 [2024-07-22 15:05:09.354337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.354346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.963 [2024-07-22 15:05:09.354352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.354359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.354366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.354371] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.354376] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.963 [2024-07-22 15:05:09.354384] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:51:49.963 [2024-07-22 15:05:09.362640] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:51:49.963 [2024-07-22 15:05:09.362693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.362718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b7640 with addr=10.0.0.3, port=4420 00:51:49.963 [2024-07-22 15:05:09.362724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b7640 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.362732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b7640 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.362740] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.362744] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.362749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:51:49.963 [2024-07-22 15:05:09.362757] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:51:49.963 [2024-07-22 15:05:09.364278] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:51:49.963 [2024-07-22 15:05:09.364320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:51:49.963 [2024-07-22 15:05:09.364329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11db710 with addr=10.0.0.2, port=4420 00:51:49.963 [2024-07-22 15:05:09.364334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11db710 is same with the state(5) to be set 00:51:49.963 [2024-07-22 15:05:09.364343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11db710 (9): Bad file descriptor 00:51:49.963 [2024-07-22 15:05:09.364350] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:51:49.963 [2024-07-22 15:05:09.364355] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:51:49.963 [2024-07-22 15:05:09.364360] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:51:49.964 [2024-07-22 15:05:09.364367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
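Once the refused 4420 reconnects stop, the discovery log page reported just below drops the 4420 paths and keeps 4421, and the xtrace that follows re-reads the controller list over the host RPC socket to confirm that only the 4421 path survives. A standalone sketch of that check, assuming SPDK's scripts/rpc.py is run from the repository root and the /tmp/host.sock socket from this run is still up:

    # Rebuild the trsvcid check that get_subsystem_paths performs via rpc_cmd;
    # controller names and socket path are the ones used in this run.
    for ctrlr in mdns0_nvme0 mdns1_nvme0; do
        paths=$(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
                  | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
        [[ $paths == 4421 ]] || echo "unexpected path(s) for $ctrlr: $paths"
    done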
00:51:49.964 [2024-07-22 15:05:09.371784] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:51:49.964 [2024-07-22 15:05:09.371804] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:51:49.964 [2024-07-22 15:05:09.371817] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:49.964 [2024-07-22 15:05:09.371834] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:51:49.964 [2024-07-22 15:05:09.371844] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:51:49.964 [2024-07-22 15:05:09.371851] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:51:49.964 [2024-07-22 15:05:09.458695] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:51:49.964 [2024-07-22 15:05:09.458747] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:51:50.903 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:50.904 15:05:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:51:50.904 [2024-07-22 15:05:10.518699] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:51:51.935 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:52.195 [2024-07-22 15:05:11.746453] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:51:52.195 2024/07/22 15:05:11 error on 
JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:51:52.195 request: 00:51:52.195 { 00:51:52.195 "method": "bdev_nvme_start_mdns_discovery", 00:51:52.195 "params": { 00:51:52.195 "name": "mdns", 00:51:52.195 "svcname": "_nvme-disc._http", 00:51:52.195 "hostnqn": "nqn.2021-12.io.spdk:test" 00:51:52.195 } 00:51:52.195 } 00:51:52.195 Got JSON-RPC error response 00:51:52.195 GoRPCClient: error on JSON-RPC call 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:51:52.195 15:05:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:51:52.762 [2024-07-22 15:05:12.330083] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:51:53.022 [2024-07-22 15:05:12.429887] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:51:53.022 [2024-07-22 15:05:12.529704] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:51:53.022 [2024-07-22 15:05:12.529724] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:51:53.022 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:53.022 cookie is 0 00:51:53.022 is_local: 1 00:51:53.022 our_own: 0 00:51:53.022 wide_area: 0 00:51:53.022 multicast: 1 00:51:53.022 cached: 1 00:51:53.022 [2024-07-22 15:05:12.629509] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:51:53.022 [2024-07-22 15:05:12.629526] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:51:53.022 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:53.022 cookie is 0 00:51:53.022 is_local: 1 00:51:53.022 our_own: 0 00:51:53.022 wide_area: 0 00:51:53.022 multicast: 1 00:51:53.022 cached: 1 00:51:53.022 [2024-07-22 15:05:12.629533] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:51:53.282 [2024-07-22 15:05:12.729314] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:51:53.282 [2024-07-22 15:05:12.729331] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:51:53.282 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:53.282 cookie is 0 00:51:53.282 is_local: 1 00:51:53.282 our_own: 0 00:51:53.282 wide_area: 0 00:51:53.282 multicast: 1 00:51:53.282 cached: 1 00:51:53.282 [2024-07-22 15:05:12.829121] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:51:53.282 [2024-07-22 15:05:12.829135] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:51:53.282 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:51:53.282 cookie is 0 00:51:53.282 is_local: 1 00:51:53.282 our_own: 0 00:51:53.282 wide_area: 0 00:51:53.282 multicast: 1 00:51:53.282 cached: 1 00:51:53.282 [2024-07-22 15:05:12.829140] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:51:54.216 [2024-07-22 15:05:13.535870] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:51:54.216 [2024-07-22 15:05:13.535894] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:51:54.216 [2024-07-22 15:05:13.535906] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:51:54.216 [2024-07-22 15:05:13.621783] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:51:54.216 [2024-07-22 15:05:13.680388] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:51:54.216 [2024-07-22 15:05:13.680413] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:51:54.216 [2024-07-22 15:05:13.735354] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:51:54.216 [2024-07-22 15:05:13.735373] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:51:54.216 [2024-07-22 15:05:13.735395] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:51:54.216 [2024-07-22 15:05:13.821267] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:51:54.475 [2024-07-22 15:05:13.879634] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:51:54.475 [2024-07-22 15:05:13.879656] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_mdns_discovery_info 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:57.765 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:51:57.766 15:05:16 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.766 [2024-07-22 15:05:16.909257] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:51:57.766 2024/07/22 15:05:16 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:51:57.766 request: 00:51:57.766 { 00:51:57.766 "method": "bdev_nvme_start_mdns_discovery", 00:51:57.766 "params": { 00:51:57.766 "name": "cdc", 00:51:57.766 "svcname": "_nvme-disc._tcp", 00:51:57.766 "hostnqn": "nqn.2021-12.io.spdk:test" 00:51:57.766 } 00:51:57.766 } 00:51:57.766 Got JSON-RPC error response 00:51:57.766 GoRPCClient: error on JSON-RPC call 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.766 15:05:16 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 111991 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 111991 00:51:57.766 [2024-07-22 15:05:17.122409] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 112020 00:51:57.766 Got SIGTERM, quitting. 00:51:57.766 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:51:57.766 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:51:57.766 avahi-daemon 0.8 exiting. 
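For reference, a minimal sketch of the mDNS discovery RPCs exercised just above, assuming an SPDK host application listening on /tmp/host.sock with a discovery context named "mdns" already running for _nvme-disc._tcp (socket path, discovery names, service and NQN are taken from the trace; the snippet itself is illustrative and not part of the log):

    # A second start for the same mDNS service is expected to fail with -17 (File exists).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test \
        || echo "duplicate mDNS discovery rejected as expected"
    # Tear down the original discovery context once the negative case has been checked.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns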
00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:51:57.766 rmmod nvme_tcp 00:51:57.766 rmmod nvme_fabrics 00:51:57.766 rmmod nvme_keyring 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 111939 ']' 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 111939 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 111939 ']' 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 111939 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 111939 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:51:57.766 killing process with pid 111939 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 111939' 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 111939 00:51:57.766 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 111939 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:51:58.026 00:51:58.026 real 0m20.294s 00:51:58.026 user 0m39.644s 00:51:58.026 sys 0m1.985s 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:51:58.026 15:05:17 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:51:58.026 ************************************ 00:51:58.026 END TEST nvmf_mdns_discovery 00:51:58.026 ************************************ 00:51:58.285 15:05:17 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 
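The teardown above reduces to a short sequence; a hedged sketch of the equivalent manual steps, assuming the target was started from the same shell so its PID can be waited on (the $nvmfpid variable is hypothetical; module and interface names are the ones in the trace):

    # Stop the nvmf target process, unload the host-side NVMe/TCP kernel modules,
    # then drop the test address from the initiator-side veth interface.
    kill "$nvmfpid" && wait "$nvmfpid"
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip -4 addr flush nvmf_init_if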
00:51:58.285 15:05:17 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:51:58.285 15:05:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:51:58.285 15:05:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:51:58.285 15:05:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:51:58.285 ************************************ 00:51:58.285 START TEST nvmf_host_multipath 00:51:58.285 ************************************ 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:51:58.285 * Looking for test storage... 00:51:58.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:51:58.285 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:51:58.286 Cannot 
find device "nvmf_tgt_br" 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:51:58.286 Cannot find device "nvmf_tgt_br2" 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:51:58.286 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:51:58.546 Cannot find device "nvmf_tgt_br" 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:51:58.546 Cannot find device "nvmf_tgt_br2" 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:51:58.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:51:58.546 15:05:17 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:51:58.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:51:58.546 15:05:18 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:51:58.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:51:58.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:51:58.546 00:51:58.546 --- 10.0.0.2 ping statistics --- 00:51:58.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:58.546 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:51:58.546 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:51:58.546 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:51:58.546 00:51:58.546 --- 10.0.0.3 ping statistics --- 00:51:58.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:58.546 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:51:58.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:51:58.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:51:58.546 00:51:58.546 --- 10.0.0.1 ping statistics --- 00:51:58.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:51:58.546 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:51:58.546 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=112585 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 112585 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 112585 ']' 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:51:58.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:51:58.806 15:05:18 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:51:58.806 [2024-07-22 15:05:18.247878] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:51:58.806 [2024-07-22 15:05:18.248014] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:51:58.806 [2024-07-22 15:05:18.388946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:51:58.806 [2024-07-22 15:05:18.433070] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:51:58.806 [2024-07-22 15:05:18.433126] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:51:58.806 [2024-07-22 15:05:18.433131] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:51:58.806 [2024-07-22 15:05:18.433136] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:51:58.806 [2024-07-22 15:05:18.433140] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:51:58.806 [2024-07-22 15:05:18.433356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:51:58.806 [2024-07-22 15:05:18.433356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=112585 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:51:59.744 [2024-07-22 15:05:19.289957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:51:59.744 15:05:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:52:00.003 Malloc0 00:52:00.003 15:05:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:52:00.262 15:05:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:52:00.262 15:05:19 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:52:00.521 [2024-07-22 15:05:20.042519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:52:00.521 15:05:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:52:00.781 [2024-07-22 15:05:20.226252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=112679 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 112679 /var/tmp/bdevperf.sock 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 112679 ']' 
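Condensed from the trace above, a sketch of the target-side RPC sequence that builds the multipath test subsystem (run from the SPDK repository root against the target inside the nvmf_tgt_ns_spdk namespace; bdev size, NQN, serial, address and ports are the ones used by this run):

    # One 64 MB malloc bdev exported through a single subsystem with two TCP
    # listeners, so the host can reach the same namespace via 10.0.0.2:4420 and :4421.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The remainder of the test flips the ANA state of each listener (optimized, non_optimized, inaccessible) with nvmf_subsystem_listener_set_ana_state and attaches the bpftrace probe scripts/bpf/nvmf_path.bt to the target process to confirm which listener port is actually carrying I/O.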
00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:52:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:52:00.781 15:05:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:52:01.719 15:05:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:52:01.719 15:05:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:52:01.719 15:05:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:52:01.719 15:05:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:52:01.978 Nvme0n1 00:52:02.238 15:05:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:52:02.497 Nvme0n1 00:52:02.497 15:05:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:52:02.497 15:05:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:52:03.461 15:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:52:03.461 15:05:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:52:03.721 15:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:52:03.721 15:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:52:03.721 15:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:03.721 15:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=112766 00:52:03.721 15:05:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # 
cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:10.294 Attaching 4 probes... 00:52:10.294 @path[10.0.0.2, 4421]: 23460 00:52:10.294 @path[10.0.0.2, 4421]: 23956 00:52:10.294 @path[10.0.0.2, 4421]: 23830 00:52:10.294 @path[10.0.0.2, 4421]: 23853 00:52:10.294 @path[10.0.0.2, 4421]: 22848 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 112766 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=112897 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:52:10.294 15:05:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:16.865 15:05:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:52:16.865 15:05:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:16.865 Attaching 4 probes... 
00:52:16.865 @path[10.0.0.2, 4420]: 21808 00:52:16.865 @path[10.0.0.2, 4420]: 19925 00:52:16.865 @path[10.0.0.2, 4420]: 19793 00:52:16.865 @path[10.0.0.2, 4420]: 22030 00:52:16.865 @path[10.0.0.2, 4420]: 24471 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 112897 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113027 00:52:16.865 15:05:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:23.437 Attaching 4 probes... 
00:52:23.437 @path[10.0.0.2, 4421]: 15267 00:52:23.437 @path[10.0.0.2, 4421]: 23737 00:52:23.437 @path[10.0.0.2, 4421]: 22580 00:52:23.437 @path[10.0.0.2, 4421]: 23342 00:52:23.437 @path[10.0.0.2, 4421]: 23466 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113027 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:52:23.437 15:05:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:52:23.696 15:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:52:23.697 15:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:23.697 15:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113159 00:52:23.697 15:05:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:30.282 Attaching 4 probes... 
00:52:30.282 00:52:30.282 00:52:30.282 00:52:30.282 00:52:30.282 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:52:30.282 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113159 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113291 00:52:30.283 15:05:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:36.862 Attaching 4 probes... 
00:52:36.862 @path[10.0.0.2, 4421]: 22263 00:52:36.862 @path[10.0.0.2, 4421]: 23265 00:52:36.862 @path[10.0.0.2, 4421]: 23299 00:52:36.862 @path[10.0.0.2, 4421]: 23134 00:52:36.862 @path[10.0.0.2, 4421]: 23574 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113291 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:52:36.862 15:05:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:52:36.862 [2024-07-22 15:05:56.108256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108336] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108353] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 00:52:36.862 [2024-07-22 15:05:56.108384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set 
00:52:36.862 [2024-07-22 15:05:56.108389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the
tqpair=0x1fd9360 is same with the state(5) to be set
00:52:36.863 [... tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd9360 is same with the state(5) to be set -- entry repeated continuously from 15:05:56.108502 through 15:05:56.108817 ...]
00:52:36.863 15:05:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:52:37.802 15:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:52:37.802 15:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113422
00:52:37.802 15:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:52:37.802 15:05:57 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:52:44.372 Attaching 4 probes...
00:52:44.372 @path[10.0.0.2, 4420]: 22682
00:52:44.372 @path[10.0.0.2, 4420]: 23026
00:52:44.372 @path[10.0.0.2, 4420]: 23328
00:52:44.372 @path[10.0.0.2, 4420]: 23431
00:52:44.372 @path[10.0.0.2, 4420]: 23429
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113422
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:52:44.372 [2024-07-22 15:06:03.544341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:52:44.372 15:06:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6
00:52:50.944 15:06:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:52:50.944 15:06:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=113615
00:52:50.944 15:06:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 112585 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:52:50.944 15:06:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:52:56.220 15:06:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:52:56.220 15:06:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:52:56.479 15:06:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:52:56.479 15:06:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:52:56.479 Attaching 4 probes...
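For readability, the confirm_io_on_port steps traced above (multipath.sh @64-@73) amount to the pipeline sketched below. This is a rough reconstruction from the trace, not the script source: the subsystem NQN, the RPC names and the jq/cut/awk/sed pipeline are copied from the log, while $app_pid (112585 in the trace), the trace.txt location and the way bpftrace.sh hands back the probe pid are assumptions.

# Hedged sketch of the confirm_io_on_port flow seen in the trace; not the actual multipath.sh source.
confirm_io_on_port() {
        local expected_state=$1 expected_port=$2 rc
        # bpftrace.sh <app pid> nvmf_path.bt counts I/O per path; assumed to print the probe pid
        # and to write its @path counters into trace.txt, as the trace suggests.
        dtrace_pid=$(scripts/bpftrace.sh "$app_pid" scripts/bpf/nvmf_path.bt)
        sleep 6
        # ask the target which listener currently reports the expected ANA state
        active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
                jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
        # take the port from the first "@path[10.0.0.2, <port>]: <count>" line the probe printed
        port=$(cut -d ']' -f1 < trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
        # pass only if bdevperf actually did its I/O on the port we expect, and that port is
        # the one the target reports in the expected ANA state
        if [[ "$port" == "$expected_port" && "$port" == "$active_port" ]]; then
                rc=0
        else
                rc=1
        fi
        kill "$dtrace_pid"
        rm -f trace.txt
        return "$rc"
}

In the run above this check passes twice: first for the non_optimized listener on 4420, then, after the 4421 listener is added and marked optimized, for port 4421.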
00:52:56.479 @path[10.0.0.2, 4421]: 22427
00:52:56.479 @path[10.0.0.2, 4421]: 23523
00:52:56.479 @path[10.0.0.2, 4421]: 23580
00:52:56.479 @path[10.0.0.2, 4421]: 23476
00:52:56.479 @path[10.0.0.2, 4421]: 23430
00:52:56.479 15:06:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:52:56.479 15:06:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:52:56.479 15:06:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 113615
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 112679
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 112679 ']'
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 112679
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112679
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
killing process with pid 112679
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112679'
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 112679
00:52:56.479 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 112679
00:52:56.764 Connection closed with partial response:
00:52:56.764
00:52:56.764
00:52:56.764 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 112679
00:52:56.764 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:52:56.764 [2024-07-22 15:05:20.294032] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:52:56.764 [2024-07-22 15:05:20.294113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112679 ]
00:52:56.764 [2024-07-22 15:05:20.431410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:52:56.764 [2024-07-22 15:05:20.475523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:52:56.764 Running I/O for 90 seconds...
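The killprocess teardown traced above (common/autotest_common.sh @946-@970) boils down to the sketch below. The individual checks mirror the traced commands; the control flow around the sudo guard and the return codes are assumptions, not the helper's actual source.

# Hedged sketch of the killprocess teardown seen in the trace; simplified reconstruction only.
killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                        # '[' -z 112679 ']' in the trace
        kill -0 "$pid" || return 0                       # assumption: nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
                process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 in the traced run
        fi
        if [ "$process_name" = sudo ]; then
                # assumption: a sudo wrapper needs different handling; the traced run never hit this branch
                return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # the killed bdevperf exits nonzero; ignore it
}

In the log above the guarded process is the bdevperf instance (pid 112679, comm reactor_2), which is why its "Connection closed with partial response" message appears right after the kill.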
00:52:56.764 [2024-07-22 15:05:29.879814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:52:56.764 [2024-07-22 15:05:29.879877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:52:56.764 [... nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeated on qid:1 for WRITE lba:96552-97312, READ lba:96296-96544 and resubmitted WRITEs from lba:96552 onward, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:52:56.768 [2024-07-22 15:05:29.884210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:52:56.768
[2024-07-22 15:05:29.884218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.884239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.884262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.884283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.884303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96336 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.768 [2024-07-22 15:05:29.884496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.884985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.885004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.885021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.885030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.885043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.885052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.885065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.885073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.768 [2024-07-22 15:05:29.885088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.768 [2024-07-22 15:05:29.885097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:52:56.769 [2024-07-22 15:05:29.885353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.885665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.885687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.904832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.904896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.904912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.904932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.904961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.904980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.904992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.905011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.905023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.905043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.905054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.905073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.905085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.905104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.769 [2024-07-22 15:05:29.905116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.769 [2024-07-22 15:05:29.905134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.770 [2024-07-22 15:05:29.905334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.905868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.905880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.770 [2024-07-22 15:05:29.906959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.906978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.770 [2024-07-22 15:05:29.906990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.907009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.770 [2024-07-22 15:05:29.907021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.907040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.770 [2024-07-22 15:05:29.907052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.907070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.770 [2024-07-22 15:05:29.907082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:52:56.770 [2024-07-22 15:05:29.907101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.770 [2024-07-22 15:05:29.907113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.907132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.770 [2024-07-22 15:05:29.907144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.907163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.770 [2024-07-22 15:05:29.907175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.770 [2024-07-22 15:05:29.907193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.771 [2024-07-22 15:05:29.907685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907722] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.907986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.907998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.771 [2024-07-22 15:05:29.908029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.771 [2024-07-22 15:05:29.908403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.771 [2024-07-22 15:05:29.908422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.772 [2024-07-22 15:05:29.908434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.772 [2024-07-22 15:05:29.908452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.772 [2024-07-22 15:05:29.908464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.772 [2024-07-22 15:05:29.908485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.772 [2024-07-22 15:05:29.908498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.772 [2024-07-22 15:05:29.908518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.772 [2024-07-22 15:05:29.908530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.772 [2024-07-22 15:05:29.908549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.772 [2024-07-22 15:05:29.908560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.772 [2024-07-22 15:05:29.908579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.772 [2024-07-22 15:05:29.908591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.772 [2024-07-22 15:05:29.908623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.772 [2024-07-22 15:05:29.908635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.772 [2024-07-22 15:05:29.908654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.772 [2024-07-22 15:05:29.908676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0
[... several hundred further nvme_qpair.c command/completion NOTICE pairs, logged between 00:52:56.772 and 00:52:56.778: READ (SGL TRANSPORT DATA BLOCK TRANSPORT) and WRITE (SGL DATA BLOCK OFFSET, len:0x1000) commands on sqid:1 nsid:1, lba range ~96296-97312, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 ...]
00:52:56.778 [2024-07-22 15:05:29.920361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.920372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE
(03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.920399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.920426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.920453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.920481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.920507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.920535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:56.778 [2024-07-22 15:05:29.920955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.920982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.920999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.778 [2024-07-22 15:05:29.921200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.778 [2024-07-22 15:05:29.921217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.778 [2024-07-22 15:05:29.921227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.921549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.921559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:52:56.779 [2024-07-22 15:05:29.926384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.779 [2024-07-22 15:05:29.926762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.926889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.926904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.779 [2024-07-22 15:05:29.927582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.779 [2024-07-22 15:05:29.927601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.780 [2024-07-22 15:05:29.927892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.927973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.927983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.780 [2024-07-22 15:05:29.928708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.780 [2024-07-22 15:05:29.928718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:52:56.781 [2024-07-22 15:05:29.928735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.928984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.928995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.929983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.929994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.930012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.930023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.930041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.781 [2024-07-22 15:05:29.930053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.930071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.781 [2024-07-22 15:05:29.930082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.930100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.781 [2024-07-22 15:05:29.930111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.930129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.781 [2024-07-22 15:05:29.930140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.781 [2024-07-22 15:05:29.930158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 
00:52:56.781 - 00:52:56.786 [2024-07-22 15:05:29.930170 - 15:05:29.938151] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ and WRITE commands on sqid:1 nsid:1 (lba 96296 - 97312, len:8, cids 0 - 126) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 
00:52:56.786 [2024-07-22 15:05:29.938151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.786 [2024-07-22 15:05:29.938163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.786 [2024-07-22 15:05:29.938180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.786 [2024-07-22 15:05:29.938192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.786 [2024-07-22 15:05:29.938210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:52:56.787 [2024-07-22 15:05:29.938756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.938976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.938994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.939023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.939052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.939082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.939111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.939140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.939170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.787 [2024-07-22 15:05:29.939199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.787 [2024-07-22 15:05:29.939211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.939982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.939990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.788 [2024-07-22 15:05:29.940075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.788 [2024-07-22 15:05:29.940209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.788 [2024-07-22 15:05:29.940617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.788 [2024-07-22 15:05:29.940642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.940665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.940701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.940724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.940746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:52:56.789 [2024-07-22 15:05:29.940760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.940988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.940996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.789 [2024-07-22 15:05:29.941316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:56.789 [2024-07-22 15:05:29.941428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.789 [2024-07-22 15:05:29.941520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.789 [2024-07-22 15:05:29.941540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.941549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.941563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.941572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:52:56.790 [2024-07-22 15:05:29.942632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.790 [2024-07-22 15:05:29.942928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.790 [2024-07-22 15:05:29.942941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.942949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.942962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.942971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.942984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.942992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.791 [2024-07-22 15:05:29.943270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.943990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.943999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.944020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.944041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.791 [2024-07-22 15:05:29.944062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.791 [2024-07-22 15:05:29.944084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.791 [2024-07-22 15:05:29.944105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.791 [2024-07-22 15:05:29.944130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.791 [2024-07-22 15:05:29.944151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.791 [2024-07-22 15:05:29.944173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.791 [2024-07-22 15:05:29.944194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.791 [2024-07-22 15:05:29.944215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.791 [2024-07-22 15:05:29.944228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 
dnr:0 00:52:56.792 [2024-07-22 15:05:29.944355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.792 [2024-07-22 15:05:29.944558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.792 [2024-07-22 15:05:29.944934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.792 [2024-07-22 15:05:29.944946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.944960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.944968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.944981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.944989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.793 [2024-07-22 15:05:29.945010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 
nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.793 [2024-07-22 15:05:29.945312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.945989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.945998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:52:56.793 [2024-07-22 15:05:29.946209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.793 [2024-07-22 15:05:29.946306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.793 [2024-07-22 15:05:29.946319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.794 [2024-07-22 15:05:29.946854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.946981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.946994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.947002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.947015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.947023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.947036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.947044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.947057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.947066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.947079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.947092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.947557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.947573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.947588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.794 [2024-07-22 15:05:29.947597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.794 [2024-07-22 15:05:29.947610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.947892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.947913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.947935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.947956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:52:56.795 [2024-07-22 15:05:29.947969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.947977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.947990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.947999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.795 [2024-07-22 15:05:29.948391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.948412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.948434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.795 [2024-07-22 15:05:29.948446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.795 [2024-07-22 15:05:29.948457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.796 [2024-07-22 15:05:29.948616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.948955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.948976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.948989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.948997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.949019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.949040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.949061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.949084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.949106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.949127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.796 [2024-07-22 15:05:29.949649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.796 [2024-07-22 15:05:29.949664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.796 [2024-07-22 15:05:29.949683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:52:56.797 [2024-07-22 15:05:29.949788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.949983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.949992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.797 [2024-07-22 15:05:29.950421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.797 [2024-07-22 15:05:29.950522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.797 [2024-07-22 15:05:29.950531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.950854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.950862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:52:56.798 [2024-07-22 15:05:29.951511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.798 [2024-07-22 15:05:29.951646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.798 [2024-07-22 15:05:29.951676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.798 [2024-07-22 15:05:29.951698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.798 [2024-07-22 15:05:29.951719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.798 [2024-07-22 15:05:29.951744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.798 [2024-07-22 15:05:29.951766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.798 [2024-07-22 15:05:29.951787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.798 [2024-07-22 15:05:29.951800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.951979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.951992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.952000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.952025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.952046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.952067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.952088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.952110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.799 [2024-07-22 15:05:29.952130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:56.799 [2024-07-22 15:05:29.952152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.799 [2024-07-22 15:05:29.952586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.799 [2024-07-22 15:05:29.952606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.952614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.952635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.952656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.952686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.952707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.952728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.952749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.952770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.952791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:52:56.800 [2024-07-22 15:05:29.952804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.952817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.952838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.952851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.952860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.953391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.800 [2024-07-22 15:05:29.953415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.800 [2024-07-22 15:05:29.953944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.800 [2024-07-22 15:05:29.953952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.953965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.801 [2024-07-22 15:05:29.953973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.953986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.953994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.954586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.954594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:52:56.801 [2024-07-22 15:05:29.955044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.955060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.955075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.955083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.801 [2024-07-22 15:05:29.955097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.801 [2024-07-22 15:05:29.955105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:56.802 [2024-07-22 15:05:29.955702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.802 [2024-07-22 15:05:29.955915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.802 [2024-07-22 15:05:29.955949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.802 [2024-07-22 15:05:29.955957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.955970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.955978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.955990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.955998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.956011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.956019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.803 
[2024-07-22 15:05:29.960965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.960986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.960994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.961016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.961037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.961058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.961083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.803 [2024-07-22 15:05:29.961944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.961967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.961980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.961989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.962002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.962017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.962030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.962039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.962052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.962060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.803 [2024-07-22 15:05:29.962081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.803 [2024-07-22 15:05:29.962095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:52:56.804 [2024-07-22 15:05:29.962277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.804 [2024-07-22 15:05:29.962571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.804 [2024-07-22 15:05:29.962580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:52:56.805 [2024-07-22 15:05:29.962928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.962979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.962992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.963000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.963013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.963021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.963035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.963043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.963056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.963065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.963078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.963097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.963110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.963119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:29.963715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:29.963733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:36.285623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:36.285657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:36.285689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:36.285711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:36.285734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.805 [2024-07-22 15:05:36.285756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.805 [2024-07-22 15:05:36.285779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.805 [2024-07-22 15:05:36.285801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.805 [2024-07-22 15:05:36.285823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.805 [2024-07-22 15:05:36.285866] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.805 [2024-07-22 15:05:36.285888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.805 [2024-07-22 15:05:36.285910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.805 [2024-07-22 15:05:36.285933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.805 [2024-07-22 15:05:36.285947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.285955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.285969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.806 [2024-07-22 15:05:36.285977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.285991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:67 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.286613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.806 [2024-07-22 15:05:36.286636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.286973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.806 [2024-07-22 15:05:36.286987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.806 [2024-07-22 15:05:36.287011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.806 [2024-07-22 15:05:36.287034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.806 [2024-07-22 15:05:36.287058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.806 [2024-07-22 15:05:36.287081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 
15:05:36.287260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.806 [2024-07-22 15:05:36.287314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.806 [2024-07-22 15:05:36.287328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:005d p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.287978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.287992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.288001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.288014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.288023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.288037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.288045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.288059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.288068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.288082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.288091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.288105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.807 [2024-07-22 15:05:36.288113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.807 [2024-07-22 15:05:36.288127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 
[2024-07-22 15:05:36.288320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:55 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.808 [2024-07-22 15:05:36.288895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.808 [2024-07-22 15:05:36.288918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.288932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.808 [2024-07-22 15:05:36.288941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.289367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.808 [2024-07-22 15:05:36.289381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.289396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.808 [2024-07-22 15:05:36.289406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.289420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.808 [2024-07-22 15:05:36.289428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.808 [2024-07-22 15:05:36.289442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.808 [2024-07-22 15:05:36.289451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.289473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.809 
[2024-07-22 15:05:36.289683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.289692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.289718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.289740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.289762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.289789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.289811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:18 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.289982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.289990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.290004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.809 [2024-07-22 15:05:36.290012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.290026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.290035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.290049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.290059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.290077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.290086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.290100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.809 [2024-07-22 15:05:36.290109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.809 [2024-07-22 15:05:36.290123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.290495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.810 [2024-07-22 15:05:36.290517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.810 [2024-07-22 15:05:36.290539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.290553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.810 [2024-07-22 15:05:36.290562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:768 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.810 [2024-07-22 15:05:36.303285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.810 [2024-07-22 15:05:36.303317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.810 [2024-07-22 15:05:36.303362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:85 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.303680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.303693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.304403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.304432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.304453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.304465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.304482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.304493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.304511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.304521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.810 [2024-07-22 15:05:36.304539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.810 [2024-07-22 15:05:36.304550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.811 
[2024-07-22 15:05:36.304912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.304979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.304996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 
sqhd:0071 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.811 [2024-07-22 15:05:36.305663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.811 [2024-07-22 15:05:36.305692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.305975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.305986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 
[2024-07-22 15:05:36.306042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.306183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.306751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.306790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.306818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.306846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.306874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.306902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.306975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.306986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.307015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.307042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.307071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.307099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.307132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.307160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.307188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.307217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.307245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.307273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.812 [2024-07-22 15:05:36.307302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.812 [2024-07-22 15:05:36.307330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.812 [2024-07-22 15:05:36.307347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.813 [2024-07-22 15:05:36.307559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 
m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.307968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.307992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.813 [2024-07-22 15:05:36.308393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.813 [2024-07-22 15:05:36.308430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.813 [2024-07-22 15:05:36.308468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.813 [2024-07-22 15:05:36.308506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.813 [2024-07-22 15:05:36.308543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.813 [2024-07-22 15:05:36.308581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.813 [2024-07-22 15:05:36.308757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.813 [2024-07-22 15:05:36.308780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.308795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.308818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.308833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.308856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.308877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.308903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 
15:05:36.308918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.308941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.308956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.308980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.308994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.309861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.309886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.309912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.309927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.309951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.309966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.309989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.814 [2024-07-22 15:05:36.310904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.814 [2024-07-22 15:05:36.310927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.310941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.310965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.310979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.815 
[2024-07-22 15:05:36.311312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.311514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.311537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.318694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.318739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.318758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.318790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.318824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.318856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.318875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 
sqhd:0005 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.318907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.318927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.318958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.318977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.319570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.815 [2024-07-22 15:05:36.319590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.320645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.815 [2024-07-22 15:05:36.320696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.320733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.815 [2024-07-22 15:05:36.320754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.320785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.815 [2024-07-22 15:05:36.320804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.320836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.815 [2024-07-22 15:05:36.320855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.320886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.815 [2024-07-22 15:05:36.320906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.815 [2024-07-22 15:05:36.320937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.320956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.320987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.321007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 
[2024-07-22 15:05:36.321477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.321527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.321578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.321628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.321696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.321747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.321965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.321990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130928 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.816 [2024-07-22 15:05:36.322282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.322951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.322975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.323006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.323027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.816 [2024-07-22 15:05:36.323058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.816 [2024-07-22 15:05:36.323083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 
dnr:0 00:52:56.817 [2024-07-22 15:05:36.323170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.817 [2024-07-22 15:05:36.323540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.817 [2024-07-22 15:05:36.323596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.817 [2024-07-22 15:05:36.323652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.817 [2024-07-22 15:05:36.323724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.817 [2024-07-22 15:05:36.323779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.817 [2024-07-22 15:05:36.323836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.323962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.323994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.324017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.324049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.324073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.324105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.324129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.324161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.324186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.324217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.324242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.324274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.324298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.324330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.324354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.325960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.325980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.817 [2024-07-22 15:05:36.326420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.817 [2024-07-22 15:05:36.326452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 
[2024-07-22 15:05:36.326565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.326962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.326993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:352 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.327966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.327991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.328006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.328030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.328046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.328070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.328086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.328110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.328125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.818 [2024-07-22 15:05:36.328150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.818 [2024-07-22 15:05:36.328166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.819 
[2024-07-22 15:05:36.328520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.328608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.328629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.329441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.329487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.329532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.329573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.329621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.329694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.329735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.329775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.329815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.329855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.329894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.329938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.329964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.329980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.330144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.330196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.330237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.330277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.330316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.819 [2024-07-22 15:05:36.330356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.819 [2024-07-22 15:05:36.330584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.819 [2024-07-22 15:05:36.330599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.820 [2024-07-22 15:05:36.330751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.330982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.330998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:73 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.820 [2024-07-22 15:05:36.331698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.820 [2024-07-22 15:05:36.331748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.820 [2024-07-22 15:05:36.331798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.820 [2024-07-22 15:05:36.331839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.820 [2024-07-22 15:05:36.331879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.820 [2024-07-22 15:05:36.331918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.331959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.331984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.332000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.332024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.332040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.332064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.332079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.332104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.332120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.332145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.332161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.332185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.332201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 15:05:36.332226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.820 [2024-07-22 15:05:36.332245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.820 [2024-07-22 
15:05:36.333172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e 
p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.333979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.333998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334472] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.821 [2024-07-22 15:05:36.334888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.821 [2024-07-22 15:05:36.334912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.334928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.334952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.334967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.334992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:56.822 [2024-07-22 15:05:36.335355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:592 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.335819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.335838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.336658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.822 [2024-07-22 15:05:36.336730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.822 [2024-07-22 15:05:36.336772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.822 [2024-07-22 15:05:36.336812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.822 [2024-07-22 15:05:36.336852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.822 [2024-07-22 15:05:36.336896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.822 [2024-07-22 15:05:36.336937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.336961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.822 [2024-07-22 15:05:36.336976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.822 [2024-07-22 15:05:36.337001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:52:56.822 [2024-07-22 15:05:36.337020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:52:56.822 [2024-07-22 15:05:36.337045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:52:56.822 [2024-07-22 15:05:36.337060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:52:56.822 [2024-07-22 15:05:36.337084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:52:56.822 [2024-07-22 15:05:36.337103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[... trace continues in the same pattern through build clock 00:52:56.828 (2024-07-22 15:05:36.344): each queued READ/WRITE command on qid:1 (nsid:1, len:8, varying cid; lba ranges 0-840 and 130896-131064) is printed by nvme_io_qpair_print_command and immediately completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:52:56.828 [2024-07-22 15:05:36.344547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:280 len:8 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:119 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.344976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.344992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.828 [2024-07-22 15:05:36.345192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.828 [2024-07-22 15:05:36.345209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.829 
[2024-07-22 15:05:36.345365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.345580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.345590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.829 [2024-07-22 15:05:36.346616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.829 [2024-07-22 15:05:36.346757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:52:56.829 [2024-07-22 15:05:36.346774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.346784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.346810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.346836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.346862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.346888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.346922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:52:56.830 [2024-07-22 15:05:36.346948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.346975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.346991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.347027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.347613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.347639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.347665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.347700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.347733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 
15:05:36.347746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.830 [2024-07-22 15:05:36.347754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.830 [2024-07-22 15:05:36.347776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.830 [2024-07-22 15:05:36.347789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.347798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.347811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.347819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.347832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.347840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.347854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.347862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 
m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.831 [2024-07-22 15:05:36.348859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.831 [2024-07-22 15:05:36.348867] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:52:56.831-00:52:56.837 [2024-07-22 15:05:36.348 - 15:05:36.354] [... several hundred further per-command notice pairs from nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion elided: every outstanding I/O on qid:1 (READ lba:0-680 and lba:130896-131064 via SGL TRANSPORT DATA BLOCK TRANSPORT, WRITE lba:688-840 via SGL DATA BLOCK OFFSET, all len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0 ...]
m:0 dnr:0 00:52:56.837 [2024-07-22 15:05:36.354655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.837 [2024-07-22 15:05:36.354663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.837 [2024-07-22 15:05:36.354685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.837 [2024-07-22 15:05:36.354693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.837 [2024-07-22 15:05:36.354710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.837 [2024-07-22 15:05:36.354719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.837 [2024-07-22 15:05:36.354732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.837 [2024-07-22 15:05:36.354740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.837 [2024-07-22 15:05:36.354753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.354953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.838 [2024-07-22 15:05:36.354978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.354991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.838 [2024-07-22 15:05:36.355000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.838 [2024-07-22 15:05:36.355022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.838 [2024-07-22 15:05:36.355043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.838 [2024-07-22 15:05:36.355064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.838 [2024-07-22 15:05:36.355085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 
[2024-07-22 15:05:36.355519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:208 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.838 [2024-07-22 15:05:36.355877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.838 [2024-07-22 15:05:36.355885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.355904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.355912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.355929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.355937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.355954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.355962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.355979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.355988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:125 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.839 
[2024-07-22 15:05:36.356541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 
sqhd:0007 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.839 [2024-07-22 15:05:36.356888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.839 [2024-07-22 15:05:36.356896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:36.356913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:36.356922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:36.357038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:36.357047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.053988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.053996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.054010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.054019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.054033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.054049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.054943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.054965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.054988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.054999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.055029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.055059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.055088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.055117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.840 [2024-07-22 15:05:43.055146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 
[2024-07-22 15:05:43.055174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3896 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:52:56.840 [2024-07-22 15:05:43.055532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.840 [2024-07-22 15:05:43.055541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055751] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.055980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.055992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 
15:05:43.056008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:0071 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:52:56.841 [2024-07-22 15:05:43.056541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.841 [2024-07-22 15:05:43.056550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.056881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.056910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.056938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.056967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.056986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.056995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 
[2024-07-22 15:05:43.057222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.842 [2024-07-22 15:05:43.057251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.057278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.057306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.057334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.057363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.057392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.057420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:43.057439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:43.057448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:52:56.842 [2024-07-22 15:05:56.109146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:52:56.842 [2024-07-22 15:05:56.109167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:52:56.842 [2024-07-22 15:05:56.109186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:52:56.842 [2024-07-22 15:05:56.109204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e1b70 is same with the state(5) to be set 00:52:56.842 [2024-07-22 15:05:56.109253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:56.109265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:56.109294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.842 [2024-07-22 15:05:56.109315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.842 [2024-07-22 15:05:56.109326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 
15:05:56.109405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109830] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.109983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.109994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.110003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.110014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.110024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.110034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.843 [2024-07-22 15:05:56.110043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.110055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.843 [2024-07-22 15:05:56.110064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.110075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.843 [2024-07-22 15:05:56.110084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.110095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.843 [2024-07-22 15:05:56.110104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.110114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.843 [2024-07-22 15:05:56.110123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.843 [2024-07-22 15:05:56.110134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.843 [2024-07-22 15:05:56.110145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110450] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110653] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.844 [2024-07-22 15:05:56.110953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.844 [2024-07-22 15:05:56.110963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.110973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.110982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.110994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 
[2024-07-22 15:05:56.111077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:52:56.845 [2024-07-22 15:05:56.111187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.845 [2024-07-22 15:05:56.111782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.845 [2024-07-22 15:05:56.111791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.846 [2024-07-22 15:05:56.111802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.846 [2024-07-22 15:05:56.111811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.846 [2024-07-22 15:05:56.111823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.846 [2024-07-22 15:05:56.111832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.846 [2024-07-22 15:05:56.111843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.846 [2024-07-22 15:05:56.111852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.846 [2024-07-22 15:05:56.111863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:52:56.846 [2024-07-22 15:05:56.111872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:52:56.846 [2024-07-22 15:05:56.111894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:52:56.846 [2024-07-22 15:05:56.111901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:52:56.846 [2024-07-22 15:05:56.111909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39480 len:8 PRP1 0x0 PRP2 0x0 00:52:56.846 [2024-07-22 15:05:56.111918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:52:56.846 [2024-07-22 15:05:56.111976] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11da900 was disconnected and freed. reset controller. 00:52:56.846 [2024-07-22 15:05:56.113228] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:52:56.846 [2024-07-22 15:05:56.113263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e1b70 (9): Bad file descriptor 00:52:56.846 [2024-07-22 15:05:56.113348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:52:56.846 [2024-07-22 15:05:56.113364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e1b70 with addr=10.0.0.2, port=4421 00:52:56.846 [2024-07-22 15:05:56.113375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e1b70 is same with the state(5) to be set 00:52:56.846 [2024-07-22 15:05:56.113391] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e1b70 (9): Bad file descriptor 00:52:56.846 [2024-07-22 15:05:56.113404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:52:56.846 [2024-07-22 15:05:56.113414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:52:56.846 [2024-07-22 15:05:56.113431] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:52:56.846 [2024-07-22 15:05:56.113451] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:52:56.846 [2024-07-22 15:05:56.113460] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:52:56.846 [2024-07-22 15:06:06.182008] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
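The reset sequence above is bdev_nvme's reconnect path: the qpair to 10.0.0.2:4421 is torn down, queued I/O is aborted with SQ DELETION, connect() is refused (errno 111) while the listener is gone, and the controller only comes back about ten seconds later. A rough way to watch the same transitions from another shell during a run like this is sketched below; it is not part of the test scripts, and it assumes the target uses the default RPC socket and the bdevperf socket path used later in this log (/var/tmp/bdevperf.sock).

  # Hypothetical observation loop (not from the test scripts themselves).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while sleep 1; do
      # controller/reset state as seen by the bdevperf initiator
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
      # subsystems and listeners currently exposed by the target
      $RPC nvmf_get_subsystems
  done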
00:52:56.846 Received shutdown signal, test time was about 54.173205 seconds
00:52:56.846
00:52:56.846                                                                                        Latency(us)
00:52:56.846 Device Information              : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:52:56.846 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:52:56.846    Verification LBA range: start 0x0 length 0x4000
00:52:56.846    Nvme0n1                      :      54.17    9713.58      37.94       0.00     0.00    13158.11    1209.12 7033243.39
00:52:56.846 ===================================================================================================================
00:52:56.846 Total                           :               9713.58      37.94       0.00     0.00    13158.11    1209.12 7033243.39
00:52:56.846 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:52:57.106 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 112585 ']'
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 112585
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 112585 ']'
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 112585
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112585
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:52:57.106 killing process with pid 112585
15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112585'
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 112585
00:52:57.106 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 112585
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:52:57.400
00:52:57.400 real 0m59.211s
00:52:57.400 user 2m50.421s
00:52:57.400 sys 0m10.471s
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable
00:52:57.400 15:06:16 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:52:57.400 ************************************
00:52:57.400 END TEST nvmf_host_multipath
00:52:57.400 ************************************
00:52:57.400 15:06:16 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:52:57.401 15:06:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:52:57.401 15:06:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:52:57.401 15:06:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:52:57.401 ************************************
00:52:57.401 START TEST nvmf_timeout
00:52:57.401 ************************************
00:52:57.401 15:06:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:52:57.663 * Looking for test storage...
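Before the timeout test proper starts, note that the nvmf_host_multipath teardown just traced reduces, once the xtrace noise is stripped, to a handful of commands. This is only a condensed sketch; the subsystem NQN, the try.txt path and the target pid (112585) are the ones from this particular run.

  # Sketch of the teardown steps shown in the trace above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # drop the subsystem first
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt  # per-test scratch file
  modprobe -v -r nvme-tcp                                    # unload host-side modules
  modprobe -v -r nvme-fabrics
  kill 112585 && wait 112585                                 # stop the nvmf_tgt reactors (pid from this run)
  ip -4 addr flush nvmf_init_if                              # clear the initiator-side address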
00:52:57.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.663 
15:06:17 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:52:57.663 15:06:17 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:52:57.664 15:06:17 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:52:57.664 Cannot find device "nvmf_tgt_br" 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:52:57.664 Cannot find device "nvmf_tgt_br2" 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:52:57.664 Cannot find device "nvmf_tgt_br" 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:52:57.664 Cannot find device "nvmf_tgt_br2" 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:52:57.664 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:52:57.925 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:52:57.925 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:52:57.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:52:57.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:52:57.925 00:52:57.925 --- 10.0.0.2 ping statistics --- 00:52:57.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:57.925 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:52:57.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:52:57.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:52:57.925 00:52:57.925 --- 10.0.0.3 ping statistics --- 00:52:57.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:57.925 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:52:57.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:52:57.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:52:57.925 00:52:57.925 --- 10.0.0.1 ping statistics --- 00:52:57.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:52:57.925 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=113934 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 113934 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 113934 ']' 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:52:57.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:52:57.925 15:06:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:52:57.925 [2024-07-22 15:06:17.531840] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
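Because NET_TYPE=virt, nvmf_veth_init builds the whole fabric out of veth pairs and a network namespace before the target is started inside it (the "Starting SPDK ... initialization" banner above, continued by the EAL parameters that follow). Condensed and slightly reordered from the trace, with the same interface names and addresses as this run, the setup amounts to roughly the sketch below; the second target interface (nvmf_tgt_if2/nvmf_tgt_br2 on 10.0.0.3) is created the same way and is omitted here.

  # Sketch of the veth/namespace topology from nvmf_veth_init, as traced above.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                          # reachability check, as in the trace
  # The target itself then runs inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &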
00:52:57.925 [2024-07-22 15:06:17.531896] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:52:58.184 [2024-07-22 15:06:17.670405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:52:58.184 [2024-07-22 15:06:17.713038] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:52:58.184 [2024-07-22 15:06:17.713109] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:52:58.184 [2024-07-22 15:06:17.713115] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:52:58.184 [2024-07-22 15:06:17.713120] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:52:58.184 [2024-07-22 15:06:17.713124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:52:58.184 [2024-07-22 15:06:17.713449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:52:58.184 [2024-07-22 15:06:17.713448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:52:58.753 15:06:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:52:58.753 15:06:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:52:58.753 15:06:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:52:58.753 15:06:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:52:58.753 15:06:18 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:52:59.011 15:06:18 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:52:59.011 15:06:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:52:59.011 15:06:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:52:59.011 [2024-07-22 15:06:18.576790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:52:59.011 15:06:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:52:59.268 Malloc0 00:52:59.268 15:06:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:52:59.526 15:06:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:52:59.784 [2024-07-22 15:06:19.338100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=114021 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # 
waitforlisten 114021 /var/tmp/bdevperf.sock 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114021 ']' 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:52:59.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:52:59.784 15:06:19 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:52:59.784 [2024-07-22 15:06:19.404589] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:52:59.784 [2024-07-22 15:06:19.404652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114021 ] 00:53:00.043 [2024-07-22 15:06:19.541598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:00.043 [2024-07-22 15:06:19.586417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:53:00.612 15:06:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:53:00.612 15:06:20 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:53:00.612 15:06:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:53:00.872 15:06:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:53:01.130 NVMe0n1 00:53:01.130 15:06:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:01.130 15:06:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=114063 00:53:01.130 15:06:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:53:01.130 Running I/O for 10 seconds... 
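On the initiator side the test drives everything through a second SPDK application, bdevperf, over its own RPC socket; the attach step carries the reconnect-related options visible in the trace (--ctrlr-loss-timeout-sec 5, --reconnect-delay-sec 2). Condensed from the trace above into a sketch, with the same sockets, NQN and flags as this run:

  # Sketch of the initiator-side setup traced above.
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bdevperf.sock
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $BPERF_SOCK -q 128 -o 4096 -w verify -t 10 -f &
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options -r -1           # option taken verbatim from the trace
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2                    # reconnect knobs under test
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests &   # kick off the verify workload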
00:53:02.069 15:06:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:02.331 [2024-07-22 15:06:21.873514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8d60 is same with the state(5) to be set 00:53:02.331 [2024-07-22 15:06:21.873565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8d60 is same with the state(5) to be set 00:53:02.331 [2024-07-22 15:06:21.873571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8d60 is same with the state(5) to be set 00:53:02.331 [2024-07-22 15:06:21.873576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8d60 is same with the state(5) to be set 00:53:02.331 [2024-07-22 15:06:21.873580] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8d60 is same with the state(5) to be set 00:53:02.331 [2024-07-22 15:06:21.873584] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8d60 is same with the state(5) to be set 00:53:02.331 [2024-07-22 15:06:21.873589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b8d60 is same with the state(5) to be set 00:53:02.331 [2024-07-22 15:06:21.873850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.331 [2024-07-22 15:06:21.873881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.331 [2024-07-22 15:06:21.873897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.331 [2024-07-22 15:06:21.873903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.331 [2024-07-22 15:06:21.873910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.331 [2024-07-22 15:06:21.873916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.331 [2024-07-22 15:06:21.873922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.331 [2024-07-22 15:06:21.873928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.873934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.873940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.873947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.873952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.873959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.873964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.873971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.873976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.873983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.873988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.873995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:104744 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:104752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.332 [2024-07-22 15:06:21.874278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.332 [2024-07-22 15:06:21.874284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:53:02.333 [2024-07-22 15:06:21.874334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 
15:06:21.874453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.333 [2024-07-22 15:06:21.874631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.333 [2024-07-22 15:06:21.874638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.334 [2024-07-22 15:06:21.874644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.334 [2024-07-22 15:06:21.874656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.334 [2024-07-22 15:06:21.874684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.334 [2024-07-22 15:06:21.874703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874715] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:53:02.334 [2024-07-22 15:06:21.874963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.874988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.874993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.875000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.875005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.334 [2024-07-22 15:06:21.875012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.334 [2024-07-22 15:06:21.875017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875085] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875209] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.335 [2024-07-22 15:06:21.875288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.335 [2024-07-22 15:06:21.875299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.335 [2024-07-22 15:06:21.875311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.335 [2024-07-22 15:06:21.875323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.335 [2024-07-22 15:06:21.875335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.335 [2024-07-22 15:06:21.875348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:02.335 [2024-07-22 15:06:21.875360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.335 [2024-07-22 15:06:21.875366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.335 [2024-07-22 15:06:21.875371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.336 [2024-07-22 15:06:21.875384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.336 [2024-07-22 15:06:21.875395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.336 [2024-07-22 15:06:21.875407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.336 [2024-07-22 15:06:21.875418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.336 [2024-07-22 15:06:21.875431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:02.336 [2024-07-22 15:06:21.875443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10acd40 is same with the state(5) to be set 00:53:02.336 [2024-07-22 15:06:21.875456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:02.336 [2024-07-22 15:06:21.875461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:02.336 [2024-07-22 15:06:21.875467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105568 len:8 PRP1 0x0 PRP2 0x0 00:53:02.336 [2024-07-22 15:06:21.875472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:02.336 [2024-07-22 15:06:21.875514] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10acd40 was disconnected and freed. reset controller. 00:53:02.336 [2024-07-22 15:06:21.875722] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:02.336 [2024-07-22 15:06:21.875781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1079c60 (9): Bad file descriptor 00:53:02.336 [2024-07-22 15:06:21.875845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:02.336 [2024-07-22 15:06:21.875855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1079c60 with addr=10.0.0.2, port=4420 00:53:02.336 [2024-07-22 15:06:21.875861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079c60 is same with the state(5) to be set 00:53:02.336 [2024-07-22 15:06:21.875871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1079c60 (9): Bad file descriptor 00:53:02.336 [2024-07-22 15:06:21.875880] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:02.336 [2024-07-22 15:06:21.875886] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:02.336 [2024-07-22 15:06:21.875896] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:02.336 [2024-07-22 15:06:21.875910] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
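The errno = 111 in the connect failures above is ECONNREFUSED: with the target's listener gone, every reconnect attempt from bdev_nvme is refused, so the host cycles through "resetting controller" / "controller reinitialization failed" / "Resetting controller failed" until the controller's loss timeout expires. The knobs that drive this retry loop are visible later in this log, when the next bdevperf instance attaches its controller (socket path, address and NQN exactly as used by this run):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

As I read these options, --reconnect-delay-sec spaces the reconnect attempts, --fast-io-fail-timeout-sec bounds how long queued I/O is held before being failed back, and --ctrlr-loss-timeout-sec bounds how long the controller may stay disconnected before it is deleted; that reading is inferred from the option names and the behaviour traced here, not stated by the log itself.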
00:53:02.336 [2024-07-22 15:06:21.875916] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:02.336 15:06:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:53:04.873 [2024-07-22 15:06:23.872329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:04.873 [2024-07-22 15:06:23.872384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1079c60 with addr=10.0.0.2, port=4420 00:53:04.874 [2024-07-22 15:06:23.872394] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079c60 is same with the state(5) to be set 00:53:04.874 [2024-07-22 15:06:23.872414] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1079c60 (9): Bad file descriptor 00:53:04.874 [2024-07-22 15:06:23.872433] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:04.874 [2024-07-22 15:06:23.872439] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:04.874 [2024-07-22 15:06:23.872446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:04.874 [2024-07-22 15:06:23.872468] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:53:04.874 [2024-07-22 15:06:23.872475] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:04.874 15:06:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:53:04.874 15:06:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:53:04.874 15:06:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:53:04.874 15:06:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:53:04.874 15:06:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:53:04.874 15:06:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:53:04.874 15:06:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:53:04.874 15:06:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:53:04.874 15:06:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:53:06.279 [2024-07-22 15:06:25.868851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:06.279 [2024-07-22 15:06:25.868906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1079c60 with addr=10.0.0.2, port=4420 00:53:06.279 [2024-07-22 15:06:25.868916] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1079c60 is same with the state(5) to be set 00:53:06.279 [2024-07-22 15:06:25.868935] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1079c60 (9): Bad file descriptor 00:53:06.279 [2024-07-22 15:06:25.868946] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:06.279 [2024-07-22 15:06:25.868952] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:06.279 [2024-07-22 15:06:25.868960] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed 
state. 00:53:06.279 [2024-07-22 15:06:25.868982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:53:06.279 [2024-07-22 15:06:25.868988] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:08.814 [2024-07-22 15:06:27.865264] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:08.814 [2024-07-22 15:06:27.865324] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:08.814 [2024-07-22 15:06:27.865331] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:08.814 [2024-07-22 15:06:27.865338] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:53:08.814 [2024-07-22 15:06:27.865359] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:53:09.383 00:53:09.383 Latency(us) 00:53:09.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:09.383 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:53:09.383 Verification LBA range: start 0x0 length 0x4000 00:53:09.383 NVMe0n1 : 8.12 1608.63 6.28 15.76 0.00 78858.34 1724.26 7033243.39 00:53:09.383 =================================================================================================================== 00:53:09.383 Total : 1608.63 6.28 15.76 0.00 78858.34 1724.26 7033243.39 00:53:09.383 0 00:53:09.951 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:53:09.951 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:53:09.951 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:53:09.951 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:53:09.951 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:53:09.951 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:53:09.951 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 114063 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 114021 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114021 ']' 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114021 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114021 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:53:10.210 killing process with pid 114021 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114021' 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114021 
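For reference, the get_controller/get_bdev checks traced just above reduce to two RPC queries against the bdevperf socket; with the paths this run uses they are:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'

Earlier in the run the same pair of queries returned NVMe0 and NVMe0n1; here both come back empty ([[ '' == '' ]]), which is consistent with the controller and its namespace bdev having been deleted once the loss timeout ran out, after which the test kills the bdevperf process (pid 114021).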
00:53:10.210 Received shutdown signal, test time was about 9.001622 seconds 00:53:10.210 00:53:10.210 Latency(us) 00:53:10.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:10.210 =================================================================================================================== 00:53:10.210 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:10.210 15:06:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114021 00:53:10.469 15:06:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:10.728 [2024-07-22 15:06:30.101229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=114221 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 114221 /var/tmp/bdevperf.sock 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114221 ']' 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:53:10.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:53:10.728 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:53:10.728 [2024-07-22 15:06:30.173448] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:53:10.728 [2024-07-22 15:06:30.173525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114221 ] 00:53:10.728 [2024-07-22 15:06:30.313109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:53:10.986 [2024-07-22 15:06:30.358235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:53:11.553 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:53:11.553 15:06:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:53:11.553 15:06:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:53:11.812 15:06:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:53:12.071 NVMe0n1 00:53:12.071 15:06:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:53:12.071 15:06:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=114263 00:53:12.071 15:06:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:53:12.071 Running I/O for 10 seconds... 00:53:13.007 15:06:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:13.271 [2024-07-22 15:06:32.651234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651287] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the 
state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651342] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651402] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.271 [2024-07-22 15:06:32.651416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651420] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651444] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651463] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651507] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651554] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651564] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651587] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651597] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 
00:53:13.272 [2024-07-22 15:06:32.651675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651680] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651724] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.651758] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdd50 is same with the state(5) to be set 00:53:13.272 [2024-07-22 15:06:32.652324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 
15:06:32.652417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.272 [2024-07-22 15:06:32.652536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.272 [2024-07-22 15:06:32.652542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652835] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.652989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.652995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.273 [2024-07-22 15:06:32.653335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.273 [2024-07-22 15:06:32.653342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:53:13.274 [2024-07-22 15:06:32.653450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:13.274 [2024-07-22 15:06:32.653519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 
15:06:32.653649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.653989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.653996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:32 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.274 [2024-07-22 15:06:32.654134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.274 [2024-07-22 15:06:32.654149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107544 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:53:13.275 [2024-07-22 15:06:32.654449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:13.275 [2024-07-22 15:06:32.654843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:13.275 [2024-07-22 15:06:32.654851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:13.275 [2024-07-22 15:06:32.654895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:53:13.275 [2024-07-22 15:06:32.654907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107792 len:8 PRP1 0x0 PRP2 0x0
00:53:13.275 [2024-07-22 15:06:32.654913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:13.275 [2024-07-22 15:06:32.654923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:53:13.275 [2024-07-22 15:06:32.654927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:53:13.276 [2024-07-22 15:06:32.654932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107800 len:8 PRP1 0x0 PRP2 0x0
00:53:13.276 [2024-07-22 15:06:32.654937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:13.276 [2024-07-22 15:06:32.654954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:53:13.276 [2024-07-22 15:06:32.654958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:53:13.276 [2024-07-22 15:06:32.654963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107808 len:8 PRP1 0x0 PRP2 0x0
00:53:13.276 [2024-07-22 15:06:32.654968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:53:13.276 [2024-07-22 15:06:32.655025] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22a8d70 was disconnected and freed. reset controller.
00:53:13.276 [2024-07-22 15:06:32.655235] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:53:13.276 [2024-07-22 15:06:32.655299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor
00:53:13.276 [2024-07-22 15:06:32.655375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:53:13.276 [2024-07-22 15:06:32.655390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295aa0 with addr=10.0.0.2, port=4420
00:53:13.276 [2024-07-22 15:06:32.655396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295aa0 is same with the state(5) to be set
00:53:13.276 [2024-07-22 15:06:32.655407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor
00:53:13.276 [2024-07-22 15:06:32.655417] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:53:13.276 [2024-07-22 15:06:32.655426] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:53:13.276 [2024-07-22 15:06:32.655434] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:53:13.276 [2024-07-22 15:06:32.655458] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:53:13.276 [2024-07-22 15:06:32.655477] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:53:13.276 15:06:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:53:14.219 [2024-07-22 15:06:33.653687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:53:14.219 [2024-07-22 15:06:33.653754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295aa0 with addr=10.0.0.2, port=4420
00:53:14.219 [2024-07-22 15:06:33.653765] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295aa0 is same with the state(5) to be set
00:53:14.219 [2024-07-22 15:06:33.653783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor
00:53:14.219 [2024-07-22 15:06:33.653794] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:53:14.219 [2024-07-22 15:06:33.653800] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:53:14.219 [2024-07-22 15:06:33.653807] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:53:14.219 [2024-07-22 15:06:33.653826] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:53:14.219 [2024-07-22 15:06:33.653833] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:53:14.219 15:06:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:53:14.478 [2024-07-22 15:06:33.850734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:53:14.478 15:06:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 114263
00:53:15.047 [2024-07-22 15:06:34.664182] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:53:23.173
00:53:23.173 Latency(us)
00:53:23.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:23.173 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:53:23.173 Verification LBA range: start 0x0 length 0x4000
00:53:23.173 NVMe0n1 : 10.01 8519.08 33.28 0.00 0.00 14999.43 1445.23 3018433.62
00:53:23.173 ===================================================================================================================
00:53:23.173 Total : 8519.08 33.28 0.00 0.00 14999.43 1445.23 3018433.62
00:53:23.173 0
00:53:23.173 15:06:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:53:23.173 15:06:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=114380
00:53:23.173 15:06:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:53:23.173 Running I/O for 10 seconds...
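The records above are driven by SPDK's nvmf host/timeout.sh test through scripts/rpc.py: the TCP listener is dropped while bdevperf still has I/O queued, the queued commands complete with ABORTED - SQ DELETION, the host's reconnect attempts fail with connect() errno 111 until the listener is re-added, and the controller reset then succeeds. The following is a minimal bash sketch of that listener-toggle pattern using only the commands that appear in this log; the variable names and exact sequencing are illustrative, not the test script itself.

#!/usr/bin/env bash
# Illustrative sketch only -- not the actual host/timeout.sh. Paths, NQN and
# address/port are the ones printed in the log above; variable names are made up.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
nqn=nqn.2016-06.io.spdk:cnode1

# Start a bdevperf run over its RPC socket and remember the pid so we can wait on it.
"$bdevperf_rpc" -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!

sleep 1
# Drop the TCP listener: in-flight I/O is failed with "ABORTED - SQ DELETION"
# and the host driver keeps retrying the controller reset (connect() errno 111).
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

sleep 1
# Restore the listener: the next reconnect succeeds and the reset completes.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

wait "$rpc_pid"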
00:53:23.173 15:06:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:23.173 [2024-07-22 15:06:42.737819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737929] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.173 [2024-07-22 15:06:42.737946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.737997] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738068] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738077] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738116] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x816c60 is same with the state(5) to be set 00:53:23.174 [2024-07-22 15:06:42.738832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738927] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.738990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.738996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.174 [2024-07-22 15:06:42.739131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.174 [2024-07-22 15:06:42.739138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739182] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.175 [2024-07-22 15:06:42.739256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 
[2024-07-22 15:06:42.739575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.175 [2024-07-22 15:06:42.739640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.175 [2024-07-22 15:06:42.739647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.739989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.739995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.740001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.740015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.740027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.740039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.740052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:23.176 [2024-07-22 15:06:42.740078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.176 [2024-07-22 15:06:42.740090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 
nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.176 [2024-07-22 15:06:42.740104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.176 [2024-07-22 15:06:42.740120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.176 [2024-07-22 15:06:42.740133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.176 [2024-07-22 15:06:42.740147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.176 [2024-07-22 15:06:42.740154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.176 [2024-07-22 15:06:42.740159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106936 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 
[2024-07-22 15:06:42.740364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:53:23.177 [2024-07-22 15:06:42.740490] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:23.177 [2024-07-22 15:06:42.740516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107104 len:8 PRP1 0x0 PRP2 0x0 00:53:23.177 [2024-07-22 15:06:42.740535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:23.177 [2024-07-22 15:06:42.740561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:23.177 [2024-07-22 15:06:42.740568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107112 len:8 PRP1 0x0 PRP2 0x0 00:53:23.177 [2024-07-22 15:06:42.740573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:23.177 [2024-07-22 15:06:42.740638] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d8e00 was disconnected and freed. reset controller. 00:53:23.177 [2024-07-22 15:06:42.740847] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:23.177 [2024-07-22 15:06:42.740906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor 00:53:23.177 [2024-07-22 15:06:42.740984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:23.177 [2024-07-22 15:06:42.740994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295aa0 with addr=10.0.0.2, port=4420 00:53:23.177 [2024-07-22 15:06:42.741003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295aa0 is same with the state(5) to be set 00:53:23.177 [2024-07-22 15:06:42.741013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor 00:53:23.177 [2024-07-22 15:06:42.741023] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:23.177 [2024-07-22 15:06:42.741028] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:23.177 [2024-07-22 15:06:42.741035] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:23.177 [2024-07-22 15:06:42.741051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:53:23.177 [2024-07-22 15:06:42.741058] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:23.177 15:06:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:53:24.116 [2024-07-22 15:06:43.739238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:24.116 [2024-07-22 15:06:43.739312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295aa0 with addr=10.0.0.2, port=4420 00:53:24.116 [2024-07-22 15:06:43.739322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295aa0 is same with the state(5) to be set 00:53:24.116 [2024-07-22 15:06:43.739339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor 00:53:24.116 [2024-07-22 15:06:43.739349] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:24.116 [2024-07-22 15:06:43.739356] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:24.116 [2024-07-22 15:06:43.739363] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:24.116 [2024-07-22 15:06:43.739381] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:53:24.116 [2024-07-22 15:06:43.739386] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:25.496 [2024-07-22 15:06:44.737571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:25.496 [2024-07-22 15:06:44.737628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295aa0 with addr=10.0.0.2, port=4420 00:53:25.496 [2024-07-22 15:06:44.737636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295aa0 is same with the state(5) to be set 00:53:25.496 [2024-07-22 15:06:44.737668] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor 00:53:25.496 [2024-07-22 15:06:44.737680] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:25.496 [2024-07-22 15:06:44.737693] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:25.496 [2024-07-22 15:06:44.737701] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:25.496 [2024-07-22 15:06:44.737718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:53:25.496 [2024-07-22 15:06:44.737725] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:53:26.432 [2024-07-22 15:06:45.738453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:53:26.432 [2024-07-22 15:06:45.738513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2295aa0 with addr=10.0.0.2, port=4420
00:53:26.432 [2024-07-22 15:06:45.738522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2295aa0 is same with the state(5) to be set
00:53:26.432 [2024-07-22 15:06:45.738707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2295aa0 (9): Bad file descriptor
00:53:26.432 [2024-07-22 15:06:45.738902] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:53:26.432 [2024-07-22 15:06:45.738915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:53:26.432 [2024-07-22 15:06:45.738949] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:53:26.432 [2024-07-22 15:06:45.741774] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:53:26.432 [2024-07-22 15:06:45.741803] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:53:26.432 15:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:53:26.432 [2024-07-22 15:06:45.939914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:53:26.432 15:06:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 114380
00:53:27.369 [2024-07-22 15:06:46.767212] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
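The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target's listener is down, so each reconnect driven by the bdev_nvme reset path fails until host/timeout.sh re-adds the listener at @102, after which the very next reset completes ("Resetting controller successful"). A minimal sketch of that listener toggle follows; it assumes a running SPDK NVMe-oF target, reuses the rpc.py path and subcommands that appear elsewhere in this log, and uses an illustrative 3-second pause rather than the test's exact timing.

  # Hypothetical reproduction of the listener toggle exercised here (not the literal timeout.sh code).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBSYS=nqn.2016-06.io.spdk:cnode1
  # Drop the TCP listener: host reconnect attempts now fail with connect() errno 111 (ECONNREFUSED).
  $RPC nvmf_subsystem_remove_listener $SUBSYS -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # leave time for a few failed reset attempts, as seen in the trace above
  # Restore the listener: the next reconnect/reset attempt should succeed.
  $RPC nvmf_subsystem_add_listener $SUBSYS -t tcp -a 10.0.0.2 -s 4420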
00:53:32.654
00:53:32.654 Latency(us)
00:53:32.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:32.654 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:53:32.654 Verification LBA range: start 0x0 length 0x4000
00:53:32.654 NVMe0n1 : 10.01 7023.69 27.44 5263.34 0.00 10398.60 456.10 3018433.62
00:53:32.654 ===================================================================================================================
00:53:32.654 Total : 7023.69 27.44 5263.34 0.00 10398.60 0.00 3018433.62
00:53:32.654 0
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 114221
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114221 ']'
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114221
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114221
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:53:32.654 killing process with pid 114221 Received shutdown signal, test time was about 10.000000 seconds
00:53:32.654
00:53:32.654 Latency(us)
00:53:32.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:53:32.654 ===================================================================================================================
00:53:32.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114221'
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114221
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114221
00:53:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=114506
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 114506 /var/tmp/bdevperf.sock
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 114506 ']'
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable
00:53:32.654 15:06:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:53:32.654 [2024-07-22 15:06:51.891084] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:53:32.654 [2024-07-22 15:06:51.891150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114506 ]
00:53:32.654 [2024-07-22 15:06:52.021336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:53:32.654 [2024-07-22 15:06:52.069833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:53:33.222 15:06:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:53:33.222 15:06:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0
00:53:33.222 15:06:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 114506 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:53:33.222 15:06:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=114533
00:53:33.222 15:06:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:53:33.508 15:06:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:53:33.769 NVMe0n1
00:53:33.769 15:06:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:53:33.769 15:06:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=114583
00:53:33.769 15:06:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:53:33.769 Running I/O for 10 seconds...
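Everything traced from host/timeout.sh@109 through @125 above is the setup for the next case: a fresh bdevperf is started idle on its own RPC socket, nvmf_timeout.bt is attached through bpftrace.sh, the bdev_nvme options from the trace are applied, the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, and only then does bdevperf.py trigger the 10-second random-read run. Condensed into one hedged sketch (paths and flags are copied from the trace; the backgrounding and variable names are illustrative, and the exact flag semantics should be checked against rpc.py and bdevperf --help rather than taken from here):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock
  # Start bdevperf idle (-z) on core mask 0x4: queue depth 128, 4096-byte random reads for 10 s once triggered.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # Attach the BPF timeout tracer to the bdevperf process, as timeout.sh@115 does.
  $SPDK/scripts/bpftrace.sh $bdevperf_pid $SPDK/scripts/bpf/nvmf_timeout.bt &
  # bdev_nvme options exactly as traced above (-r -1 -e 9).
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9
  # Attach the target with a 5 s controller-loss timeout and a 2 s reconnect delay.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the actual I/O run ("Running I/O for 10 seconds...").
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests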
00:53:34.707 15:06:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:53:34.972 [2024-07-22 15:06:54.379137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379176] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379229] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379242] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379284] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379292] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379381] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 00:53:34.972 [2024-07-22 15:06:54.379385] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819700 is same with the state(5) to be set 
00:53:34.973 [2024-07-22 15:06:54.380297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.973 [2024-07-22 15:06:54.380323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.973 [2024-07-22 15:06:54.380339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 
[2024-07-22 15:06:54.380345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380712] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.974 [2024-07-22 15:06:54.380731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.974 [2024-07-22 15:06:54.380736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:53:34.975 [2024-07-22 15:06:54.380947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.380994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.380999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.975 [2024-07-22 15:06:54.381101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.975 [2024-07-22 15:06:54.381107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381288] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:53:34.976 [2024-07-22 15:06:54.381410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.976 [2024-07-22 15:06:54.381434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31160 len:8 PRP1 0x0 PRP2 0x0 00:53:34.976 [2024-07-22 15:06:54.381439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.976 [2024-07-22 15:06:54.381454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.976 [2024-07-22 15:06:54.381459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50024 len:8 PRP1 0x0 PRP2 0x0 00:53:34.976 [2024-07-22 15:06:54.381464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.976 [2024-07-22 15:06:54.381473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.976 [2024-07-22 15:06:54.381477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39392 len:8 PRP1 0x0 PRP2 0x0 00:53:34.976 [2024-07-22 15:06:54.381482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.976 [2024-07-22 15:06:54.381487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.976 [2024-07-22 15:06:54.381491] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.976 [2024-07-22 15:06:54.381495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56960 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123224 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101584 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20528 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57272 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45336 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107864 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:53:34.977 [2024-07-22 15:06:54.381653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11728 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49120 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29488 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56888 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90784 len:8 PRP1 0x0 PRP2 0x0 00:53:34.977 [2024-07-22 15:06:54.381748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.977 [2024-07-22 15:06:54.381753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.977 [2024-07-22 15:06:54.381758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.977 [2024-07-22 15:06:54.381763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88200 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.381768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.381773] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.381777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.381782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24176 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.381787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.381792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.381796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.381800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43616 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.381805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.381810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.381814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.381818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.381822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.381828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.381832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.381836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119608 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.381841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.381846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.381850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.381854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109584 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.381860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.381865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.381869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 15:06:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 114583 00:53:34.978 [2024-07-22 15:06:54.405730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69328 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.405769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 
15:06:54.405833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.405849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.405863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22416 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.405874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.405884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.405899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.405908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83400 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.405919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.405944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.405972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.405981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66488 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.405991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22040 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19184 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70528 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406188] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77448 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81952 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43928 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56592 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:53:34.978 [2024-07-22 15:06:54.406359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:53:34.978 [2024-07-22 15:06:54.406367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75568 len:8 PRP1 0x0 PRP2 0x0 00:53:34.978 [2024-07-22 15:06:54.406377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406438] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16aaec0 was disconnected and freed. reset controller. 
00:53:34.978 [2024-07-22 15:06:54.406569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:53:34.978 [2024-07-22 15:06:54.406584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.978 [2024-07-22 15:06:54.406597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:53:34.978 [2024-07-22 15:06:54.406607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.979 [2024-07-22 15:06:54.406618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:53:34.979 [2024-07-22 15:06:54.406627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.979 [2024-07-22 15:06:54.406638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:53:34.979 [2024-07-22 15:06:54.406647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:53:34.979 [2024-07-22 15:06:54.406657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697ac0 is same with the state(5) to be set 00:53:34.979 [2024-07-22 15:06:54.407031] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:34.979 [2024-07-22 15:06:54.407055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1697ac0 (9): Bad file descriptor 00:53:34.979 [2024-07-22 15:06:54.407179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:34.979 [2024-07-22 15:06:54.407203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1697ac0 with addr=10.0.0.2, port=4420 00:53:34.979 [2024-07-22 15:06:54.407214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697ac0 is same with the state(5) to be set 00:53:34.979 [2024-07-22 15:06:54.407233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1697ac0 (9): Bad file descriptor 00:53:34.979 [2024-07-22 15:06:54.407250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:34.979 [2024-07-22 15:06:54.407260] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:34.979 [2024-07-22 15:06:54.407271] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:34.979 [2024-07-22 15:06:54.407296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:53:34.979 [2024-07-22 15:06:54.407306] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:36.883 [2024-07-22 15:06:56.403589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:36.883 [2024-07-22 15:06:56.403634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1697ac0 with addr=10.0.0.2, port=4420 00:53:36.883 [2024-07-22 15:06:56.403645] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697ac0 is same with the state(5) to be set 00:53:36.883 [2024-07-22 15:06:56.403661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1697ac0 (9): Bad file descriptor 00:53:36.884 [2024-07-22 15:06:56.403679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:36.884 [2024-07-22 15:06:56.403685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:36.884 [2024-07-22 15:06:56.403692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:36.884 [2024-07-22 15:06:56.403712] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:53:36.884 [2024-07-22 15:06:56.403719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:38.799 [2024-07-22 15:06:58.400127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:53:38.799 [2024-07-22 15:06:58.400186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1697ac0 with addr=10.0.0.2, port=4420 00:53:38.799 [2024-07-22 15:06:58.400196] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1697ac0 is same with the state(5) to be set 00:53:38.799 [2024-07-22 15:06:58.400214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1697ac0 (9): Bad file descriptor 00:53:38.799 [2024-07-22 15:06:58.400225] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:38.799 [2024-07-22 15:06:58.400231] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:38.799 [2024-07-22 15:06:58.400239] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:38.799 [2024-07-22 15:06:58.400260] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:53:38.799 [2024-07-22 15:06:58.400268] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:53:41.335 [2024-07-22 15:07:00.396526] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:53:41.335 [2024-07-22 15:07:00.396588] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:53:41.335 [2024-07-22 15:07:00.396612] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:53:41.335 [2024-07-22 15:07:00.396620] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:53:41.335 [2024-07-22 15:07:00.396642] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
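The repeated connect() failures (errno 111) and reconnect attempts above are exactly what this timeout test exercises: bdev_nvme is expected to space its reconnect attempts out with a delay. A minimal sketch of the verification that follows in the trace below, reusing the trace path and marker string printed there; the if-wrapper is illustrative, and the comparison itself mirrors the (( 3 <= 2 )) check visible further down.

# Sketch only: count delayed reconnect probes recorded in bdevperf's trace,
# mirroring the grep -c / arithmetic check the test performs below.
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt   # path taken from the trace below
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
if (( delays <= 2 )); then                                    # the test expects more than two delayed reconnects
    echo "expected more than 2 delayed reconnects, got $delays" >&2
    exit 1
fi
echo "observed $delays delayed reconnects"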
00:53:41.903 00:53:41.903 Latency(us) 00:53:41.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:41.903 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:53:41.903 NVMe0n1 : 8.14 3169.67 12.38 15.73 0.00 40126.61 1810.11 7033243.39 00:53:41.903 =================================================================================================================== 00:53:41.903 Total : 3169.67 12.38 15.73 0.00 40126.61 1810.11 7033243.39 00:53:41.903 0 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:41.903 Attaching 5 probes... 00:53:41.903 1110.122019: reset bdev controller NVMe0 00:53:41.903 1110.193189: reconnect bdev controller NVMe0 00:53:41.903 3106.598812: reconnect delay bdev controller NVMe0 00:53:41.903 3106.613830: reconnect bdev controller NVMe0 00:53:41.903 5103.091706: reconnect delay bdev controller NVMe0 00:53:41.903 5103.110420: reconnect bdev controller NVMe0 00:53:41.903 7099.606582: reconnect delay bdev controller NVMe0 00:53:41.903 7099.624722: reconnect bdev controller NVMe0 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 114533 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 114506 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 114506 ']' 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 114506 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114506 00:53:41.903 killing process with pid 114506 00:53:41.903 Received shutdown signal, test time was about 8.218875 seconds 00:53:41.903 00:53:41.903 Latency(us) 00:53:41.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:53:41.903 =================================================================================================================== 00:53:41.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114506' 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 114506 00:53:41.903 15:07:01 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 114506 00:53:42.162 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 
00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:53:42.422 15:07:01 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:53:42.422 rmmod nvme_tcp 00:53:42.422 rmmod nvme_fabrics 00:53:42.422 rmmod nvme_keyring 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 113934 ']' 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 113934 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 113934 ']' 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 113934 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113934 00:53:42.422 killing process with pid 113934 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113934' 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 113934 00:53:42.422 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 113934 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:53:42.682 00:53:42.682 real 0m45.324s 00:53:42.682 user 2m13.234s 00:53:42.682 sys 0m4.054s 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:53:42.682 15:07:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:53:42.682 ************************************ 00:53:42.682 END TEST nvmf_timeout 00:53:42.683 ************************************ 00:53:42.942 15:07:02 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:53:42.942 15:07:02 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:53:42.942 15:07:02 nvmf_tcp -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:53:42.942 15:07:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:42.942 15:07:02 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:53:42.942 00:53:42.942 real 20m40.545s 00:53:42.942 user 62m58.731s 00:53:42.942 sys 3m50.045s 00:53:42.942 15:07:02 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:53:42.942 15:07:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:42.942 ************************************ 00:53:42.942 END TEST nvmf_tcp 00:53:42.942 ************************************ 00:53:42.942 15:07:02 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:53:42.942 15:07:02 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:53:42.942 15:07:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:53:42.942 15:07:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:53:42.942 15:07:02 -- common/autotest_common.sh@10 -- # set +x 00:53:42.942 ************************************ 00:53:42.942 START TEST spdkcli_nvmf_tcp 00:53:42.942 ************************************ 00:53:42.942 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:53:42.942 * Looking for test storage... 00:53:43.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:53:43.202 15:07:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=114796 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 114796 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 114796 ']' 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:53:43.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:53:43.203 15:07:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:43.203 [2024-07-22 15:07:02.677845] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:53:43.203 [2024-07-22 15:07:02.677911] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114796 ] 00:53:43.203 [2024-07-22 15:07:02.816149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:53:43.465 [2024-07-22 15:07:02.870250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:53:43.465 [2024-07-22 15:07:02.870255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:44.035 15:07:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:53:44.035 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:53:44.035 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:53:44.035 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:53:44.035 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:53:44.035 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:53:44.035 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 
io_unit_size=8192'\'' '\'''\'' True 00:53:44.035 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:53:44.035 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:53:44.035 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:53:44.035 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:53:44.035 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:53:44.035 ' 00:53:47.325 [2024-07-22 15:07:06.205035] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:53:48.260 [2024-07-22 15:07:07.523492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:53:50.794 [2024-07-22 15:07:09.920462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:53:52.698 [2024-07-22 15:07:12.037744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:53:54.081 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:53:54.081 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:53:54.081 Executing command: ['/bdevs/malloc create 32 512 
Malloc3', 'Malloc3', True] 00:53:54.081 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:53:54.081 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:53:54.081 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:53:54.081 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:53:54.081 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:53:54.081 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:53:54.081 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:53:54.081 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:53:54.081 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:53:54.337 15:07:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:53:54.595 15:07:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:53:54.853 15:07:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:53:54.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:53:54.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:53:54.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:53:54.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:53:54.853 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:53:54.853 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:53:54.853 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:53:54.853 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:53:54.853 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:53:54.853 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:53:54.853 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:53:54.853 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:53:54.853 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:53:54.853 ' 00:54:01.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:54:01.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:54:01.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:54:01.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:54:01.443 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:54:01.443 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:54:01.443 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:54:01.443 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:54:01.443 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:54:01.443 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:54:01.443 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:54:01.443 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:54:01.443 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:54:01.443 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 114796 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 114796 ']' 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 114796 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114796 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:54:01.443 killing process with pid 114796 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114796' 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 114796 00:54:01.443 15:07:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 114796 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 114796 ']' 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 114796 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 114796 ']' 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 114796 00:54:01.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (114796) - No such process 00:54:01.443 Process with pid 114796 is not found 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 114796 is not found' 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 
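The configuration created above was validated, before the delete batch, by the check_match step traced earlier: the /nvmf tree is dumped via spdkcli.py and compared against a stored .match template. A condensed sketch of that step using the paths shown in the trace; the redirection into the .test file is an assumption, since xtrace does not print redirections.

# Sketch of check_match: dump the /nvmf tree, compare it against the stored
# template with SPDK's match tool, then remove the generated dump.
rootdir=/home/vagrant/spdk_repo/spdk                                      # paths from the trace above
match_dir=$rootdir/test/spdkcli/match_files
"$rootdir/scripts/spdkcli.py" ll /nvmf > "$match_dir/spdkcli_nvmf.test"   # redirection assumed
"$rootdir/test/app/match/match" "$match_dir/spdkcli_nvmf.test.match"
rm -f "$match_dir/spdkcli_nvmf.test"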
00:54:01.443 00:54:01.443 real 0m17.707s 00:54:01.443 user 0m38.732s 00:54:01.443 sys 0m0.953s 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:54:01.443 15:07:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:54:01.443 ************************************ 00:54:01.443 END TEST spdkcli_nvmf_tcp 00:54:01.443 ************************************ 00:54:01.444 15:07:20 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:54:01.444 15:07:20 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:54:01.444 15:07:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:54:01.444 15:07:20 -- common/autotest_common.sh@10 -- # set +x 00:54:01.444 ************************************ 00:54:01.444 START TEST nvmf_identify_passthru 00:54:01.444 ************************************ 00:54:01.444 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:54:01.444 * Looking for test storage... 00:54:01.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:01.444 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:01.444 15:07:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:01.444 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:01.444 15:07:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:54:01.444 15:07:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:01.444 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:01.444 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:54:01.444 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:01.444 Cannot find device "nvmf_tgt_br" 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:01.444 Cannot find device "nvmf_tgt_br2" 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:01.444 Cannot find device "nvmf_tgt_br" 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:01.444 Cannot find device "nvmf_tgt_br2" 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:01.444 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:01.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:01.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:01.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:01.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:54:01.445 00:54:01.445 --- 10.0.0.2 ping statistics --- 00:54:01.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:01.445 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:01.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:01.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:54:01.445 00:54:01.445 --- 10.0.0.3 ping statistics --- 00:54:01.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:01.445 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:01.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:01.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:54:01.445 00:54:01.445 --- 10.0.0.1 ping statistics --- 00:54:01.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:01.445 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:01.445 15:07:20 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:01.445 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:01.445 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:54:01.445 15:07:20 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:54:01.445 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:54:01.445 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:54:01.445 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:54:01.445 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:54:01.445 15:07:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:54:01.445 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
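The identify step traced above resolves the first NVMe PCI address from gen_nvme.sh output and reads its serial number with spdk_nvme_identify. A compressed sketch of that flow, assembled from the commands visible in the trace; taking the first traddr with head -n1 is an illustrative shortcut for the test's printf/echo helpers.

# Sketch: resolve the first NVMe PCI address and read its serial number,
# mirroring the gen_nvme.sh | jq and spdk_nvme_identify | grep | awk calls above.
rootdir=/home/vagrant/spdk_repo/spdk                          # path from the trace
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
  | grep 'Serial Number:' | awk '{print $3}')
echo "first NVMe bdf=$bdf serial=$serial"                     # the trace above shows 0000:00:10.0 and 12340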
00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=115293 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:54:01.705 15:07:21 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 115293 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 115293 ']' 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:54:01.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:54:01.705 15:07:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:01.965 [2024-07-22 15:07:21.373889] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:54:01.965 [2024-07-22 15:07:21.373959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:01.965 [2024-07-22 15:07:21.499692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:54:01.965 [2024-07-22 15:07:21.551747] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:54:01.965 [2024-07-22 15:07:21.551798] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:01.965 [2024-07-22 15:07:21.551805] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:01.965 [2024-07-22 15:07:21.551811] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
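The target launch logged here can be reproduced outside the harness roughly as below; the polling loop is an assumed stand-in for the waitforlisten helper, and rpc.py is assumed to talk to the default /var/tmp/spdk.sock socket:

    # start nvmf_tgt in the test namespace, deferring framework init until RPCs arrive
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # poll until the RPC socket answers (crude waitforlisten substitute)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done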
00:54:01.965 [2024-07-22 15:07:21.551815] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:01.965 [2024-07-22 15:07:21.551927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:54:01.965 [2024-07-22 15:07:21.552047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:54:01.965 [2024-07-22 15:07:21.552060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:54:01.965 [2024-07-22 15:07:21.551960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:54:02.903 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:54:02.903 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:54:02.903 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 [2024-07-22 15:07:22.337953] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 [2024-07-22 15:07:22.351189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 Nvme0n1 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 [2024-07-22 15:07:22.512938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:02.904 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:02.904 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:02.904 [ 00:54:02.904 { 00:54:02.904 "allow_any_host": true, 00:54:02.904 "hosts": [], 00:54:02.904 "listen_addresses": [], 00:54:02.904 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:54:02.904 "subtype": "Discovery" 00:54:02.904 }, 00:54:02.904 { 00:54:02.904 "allow_any_host": true, 00:54:02.904 "hosts": [], 00:54:02.904 "listen_addresses": [ 00:54:02.904 { 00:54:02.904 "adrfam": "IPv4", 00:54:02.904 "traddr": "10.0.0.2", 00:54:02.904 "trsvcid": "4420", 00:54:02.904 "trtype": "TCP" 00:54:02.904 } 00:54:02.904 ], 00:54:02.904 "max_cntlid": 65519, 00:54:02.904 "max_namespaces": 1, 00:54:02.904 "min_cntlid": 1, 00:54:02.904 "model_number": "SPDK bdev Controller", 00:54:02.904 "namespaces": [ 00:54:02.904 { 00:54:02.904 "bdev_name": "Nvme0n1", 00:54:02.904 "name": "Nvme0n1", 00:54:03.164 "nguid": "FCEBF8B0614C495596EEBA4D33D7F707", 00:54:03.164 "nsid": 1, 00:54:03.164 "uuid": "fcebf8b0-614c-4955-96ee-ba4d33d7f707" 00:54:03.164 } 00:54:03.164 ], 00:54:03.164 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:54:03.164 "serial_number": "SPDK00000000000001", 00:54:03.164 "subtype": "NVMe" 00:54:03.164 } 00:54:03.164 ] 00:54:03.164 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:03.164 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:54:03.164 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:54:03.164 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:54:03.164 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:54:03.164 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:54:03.164 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:54:03.164 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:54:03.423 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:54:03.423 15:07:22 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:54:03.423 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:54:03.423 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:03.423 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:03.424 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:03.424 15:07:22 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:03.424 15:07:22 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:54:03.424 15:07:23 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:54:03.424 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:54:03.424 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:54:03.769 rmmod nvme_tcp 00:54:03.769 rmmod nvme_fabrics 00:54:03.769 rmmod nvme_keyring 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 115293 ']' 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 115293 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 115293 ']' 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 115293 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115293 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115293' 00:54:03.769 killing process with pid 115293 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 115293 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 115293 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:54:03.769 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:03.769 
15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:54:03.769 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:54:04.031 15:07:23 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:54:04.031 00:54:04.031 real 0m3.184s 00:54:04.031 user 0m7.655s 00:54:04.031 sys 0m0.926s 00:54:04.031 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:54:04.031 15:07:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:54:04.031 ************************************ 00:54:04.031 END TEST nvmf_identify_passthru 00:54:04.031 ************************************ 00:54:04.031 15:07:23 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:54:04.031 15:07:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:54:04.031 15:07:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:54:04.031 15:07:23 -- common/autotest_common.sh@10 -- # set +x 00:54:04.031 ************************************ 00:54:04.031 START TEST nvmf_dif 00:54:04.031 ************************************ 00:54:04.031 15:07:23 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:54:04.031 * Looking for test storage... 00:54:04.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:54:04.031 15:07:23 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:54:04.031 15:07:23 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:54:04.031 15:07:23 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:54:04.031 15:07:23 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:54:04.031 15:07:23 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.031 15:07:23 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.031 15:07:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.031 15:07:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:54:04.031 15:07:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:54:04.031 15:07:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:54:04.031 15:07:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:54:04.031 15:07:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:54:04.031 15:07:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:54:04.031 15:07:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:54:04.031 15:07:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:54:04.031 15:07:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
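The teardown logged just above (nvmftestfini) boils down to unloading the NVMe/TCP modules, stopping the target, and removing the namespace; a rough sketch, where the explicit netns delete is an assumed equivalent of the _remove_spdk_ns helper rather than the harness code:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"             # killprocess 115293 in this run
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if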
00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:54:04.031 15:07:23 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:54:04.032 15:07:23 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:54:04.032 15:07:23 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:54:04.032 15:07:23 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:54:04.032 15:07:23 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:54:04.032 15:07:23 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:54:04.032 15:07:23 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:54:04.291 Cannot find device "nvmf_tgt_br" 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@155 -- # true 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:54:04.291 Cannot find device "nvmf_tgt_br2" 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@156 -- # true 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:54:04.291 Cannot find device "nvmf_tgt_br" 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@158 -- # true 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:54:04.291 Cannot find device "nvmf_tgt_br2" 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@159 -- # true 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:54:04.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@162 -- # true 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:54:04.291 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@163 -- # true 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:54:04.291 15:07:23 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:54:04.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:54:04.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:54:04.551 00:54:04.551 --- 10.0.0.2 ping statistics --- 00:54:04.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:04.551 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:54:04.551 15:07:23 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:54:04.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:54:04.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:54:04.551 00:54:04.551 --- 10.0.0.3 ping statistics --- 00:54:04.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:04.551 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:54:04.551 15:07:24 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:54:04.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:54:04.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:54:04.551 00:54:04.551 --- 10.0.0.1 ping statistics --- 00:54:04.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:54:04.551 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:54:04.551 15:07:24 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:54:04.551 15:07:24 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:54:04.551 15:07:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:54:04.551 15:07:24 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:54:05.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:54:05.122 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:54:05.122 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:54:05.122 15:07:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:54:05.122 15:07:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=115641 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:54:05.122 15:07:24 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 115641 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 115641 ']' 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:54:05.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:54:05.122 15:07:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:05.122 [2024-07-22 15:07:24.638494] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:54:05.122 [2024-07-22 15:07:24.638565] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:54:05.382 [2024-07-22 15:07:24.778997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:54:05.382 [2024-07-22 15:07:24.833235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
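For orientation, the veth/bridge topology rebuilt by nvmf_veth_init a few lines above (and verified by the pings) condenses to the commands below; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the loopback bring-up are omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator namespace -> target interface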
00:54:05.382 [2024-07-22 15:07:24.833286] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:54:05.382 [2024-07-22 15:07:24.833294] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:54:05.382 [2024-07-22 15:07:24.833301] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:54:05.382 [2024-07-22 15:07:24.833307] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:54:05.382 [2024-07-22 15:07:24.833331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:54:05.951 15:07:25 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:05.951 15:07:25 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:54:05.951 15:07:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:54:05.951 15:07:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:05.951 [2024-07-22 15:07:25.538830] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:05.951 15:07:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:54:05.951 15:07:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:05.951 ************************************ 00:54:05.951 START TEST fio_dif_1_default 00:54:05.951 ************************************ 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:05.951 bdev_null0 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:05.951 15:07:25 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:05.951 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:06.210 [2024-07-22 15:07:25.594836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:06.210 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:06.211 { 00:54:06.211 "params": { 00:54:06.211 "name": "Nvme$subsystem", 00:54:06.211 "trtype": "$TEST_TRANSPORT", 00:54:06.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:06.211 "adrfam": "ipv4", 00:54:06.211 "trsvcid": "$NVMF_PORT", 00:54:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:06.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:06.211 "hdgst": ${hdgst:-false}, 00:54:06.211 
"ddgst": ${ddgst:-false} 00:54:06.211 }, 00:54:06.211 "method": "bdev_nvme_attach_controller" 00:54:06.211 } 00:54:06.211 EOF 00:54:06.211 )") 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:06.211 "params": { 00:54:06.211 "name": "Nvme0", 00:54:06.211 "trtype": "tcp", 00:54:06.211 "traddr": "10.0.0.2", 00:54:06.211 "adrfam": "ipv4", 00:54:06.211 "trsvcid": "4420", 00:54:06.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:06.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:06.211 "hdgst": false, 00:54:06.211 "ddgst": false 00:54:06.211 }, 00:54:06.211 "method": "bdev_nvme_attach_controller" 00:54:06.211 }' 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:06.211 15:07:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:06.211 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:54:06.211 fio-3.35 00:54:06.211 Starting 1 thread 00:54:18.425 00:54:18.425 filename0: (groupid=0, jobs=1): err= 0: pid=115724: Mon Jul 22 15:07:36 2024 00:54:18.425 read: IOPS=1330, BW=5323KiB/s (5450kB/s)(52.1MiB/10025msec) 00:54:18.425 slat (nsec): min=5466, max=50046, avg=6493.53, stdev=2315.02 00:54:18.425 clat (usec): min=300, max=41667, avg=2987.76, stdev=9978.57 00:54:18.425 lat (usec): min=306, max=41674, avg=2994.26, stdev=9978.56 00:54:18.425 clat percentiles (usec): 00:54:18.425 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:54:18.425 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 351], 
60.00th=[ 359], 00:54:18.425 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[40633], 00:54:18.425 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:54:18.425 | 99.99th=[41681] 00:54:18.425 bw ( KiB/s): min= 3872, max= 7040, per=100.00%, avg=5334.40, stdev=750.04, samples=20 00:54:18.425 iops : min= 968, max= 1760, avg=1333.60, stdev=187.51, samples=20 00:54:18.425 lat (usec) : 500=93.28%, 750=0.19% 00:54:18.425 lat (msec) : 4=0.03%, 50=6.51% 00:54:18.425 cpu : usr=94.24%, sys=5.15%, ctx=22, majf=0, minf=0 00:54:18.425 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:18.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:18.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:18.425 issued rwts: total=13340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:18.425 latency : target=0, window=0, percentile=100.00%, depth=4 00:54:18.425 00:54:18.425 Run status group 0 (all jobs): 00:54:18.425 READ: bw=5323KiB/s (5450kB/s), 5323KiB/s-5323KiB/s (5450kB/s-5450kB/s), io=52.1MiB (54.6MB), run=10025-10025msec 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 00:54:18.425 real 0m10.968s 00:54:18.425 user 0m10.044s 00:54:18.425 sys 0m0.817s 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 ************************************ 00:54:18.425 END TEST fio_dif_1_default 00:54:18.425 ************************************ 00:54:18.425 15:07:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:54:18.425 15:07:36 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:54:18.425 15:07:36 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 ************************************ 00:54:18.425 START TEST fio_dif_1_multi_subsystems 00:54:18.425 ************************************ 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@92 -- # local files=1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 bdev_null0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 [2024-07-22 15:07:36.622783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 bdev_null1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:18.425 { 00:54:18.425 "params": { 00:54:18.425 "name": "Nvme$subsystem", 00:54:18.425 "trtype": "$TEST_TRANSPORT", 00:54:18.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:18.425 "adrfam": "ipv4", 00:54:18.425 "trsvcid": "$NVMF_PORT", 00:54:18.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:18.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:18.425 "hdgst": ${hdgst:-false}, 00:54:18.425 "ddgst": ${ddgst:-false} 00:54:18.425 }, 00:54:18.425 "method": "bdev_nvme_attach_controller" 00:54:18.425 } 00:54:18.425 EOF 00:54:18.425 )") 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:54:18.425 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:18.426 { 00:54:18.426 "params": { 00:54:18.426 "name": "Nvme$subsystem", 00:54:18.426 "trtype": "$TEST_TRANSPORT", 00:54:18.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:18.426 "adrfam": "ipv4", 00:54:18.426 "trsvcid": "$NVMF_PORT", 00:54:18.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:18.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:18.426 "hdgst": ${hdgst:-false}, 00:54:18.426 "ddgst": ${ddgst:-false} 00:54:18.426 }, 00:54:18.426 "method": "bdev_nvme_attach_controller" 00:54:18.426 } 00:54:18.426 EOF 00:54:18.426 )") 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
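The two-subsystem layout exercised by fio_dif_1_multi_subsystems (one DIF-type-1 null bdev per subsystem, both listening on 10.0.0.2:4420) maps onto the RPCs shown above; a condensed sketch, assuming rpc.py reaches the target started earlier on its default socket (the test script issues the same calls through its rpc_cmd wrapper):

    for i in 0 1; do
        ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done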
00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:18.426 "params": { 00:54:18.426 "name": "Nvme0", 00:54:18.426 "trtype": "tcp", 00:54:18.426 "traddr": "10.0.0.2", 00:54:18.426 "adrfam": "ipv4", 00:54:18.426 "trsvcid": "4420", 00:54:18.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:18.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:18.426 "hdgst": false, 00:54:18.426 "ddgst": false 00:54:18.426 }, 00:54:18.426 "method": "bdev_nvme_attach_controller" 00:54:18.426 },{ 00:54:18.426 "params": { 00:54:18.426 "name": "Nvme1", 00:54:18.426 "trtype": "tcp", 00:54:18.426 "traddr": "10.0.0.2", 00:54:18.426 "adrfam": "ipv4", 00:54:18.426 "trsvcid": "4420", 00:54:18.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:18.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:18.426 "hdgst": false, 00:54:18.426 "ddgst": false 00:54:18.426 }, 00:54:18.426 "method": "bdev_nvme_attach_controller" 00:54:18.426 }' 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:18.426 15:07:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:18.426 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:54:18.426 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:54:18.426 fio-3.35 00:54:18.426 Starting 2 threads 00:54:28.403 00:54:28.403 filename0: (groupid=0, jobs=1): err= 0: pid=115883: Mon Jul 22 15:07:47 2024 00:54:28.403 read: IOPS=196, BW=787KiB/s (806kB/s)(7904KiB/10039msec) 00:54:28.403 slat (nsec): min=5264, max=44851, avg=10271.30, stdev=6814.14 00:54:28.403 clat (usec): min=317, max=41527, avg=20287.82, stdev=20227.81 00:54:28.403 lat (usec): min=322, max=41535, avg=20298.09, stdev=20227.62 00:54:28.403 clat percentiles (usec): 00:54:28.403 | 1.00th=[ 334], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 404], 00:54:28.403 | 30.00th=[ 437], 40.00th=[ 482], 50.00th=[ 799], 60.00th=[40633], 00:54:28.403 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:54:28.403 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:54:28.403 | 99.99th=[41681] 00:54:28.403 bw ( KiB/s): min= 544, max= 1120, per=50.55%, avg=788.80, stdev=135.08, samples=20 00:54:28.403 iops : 
min= 136, max= 280, avg=197.20, stdev=33.77, samples=20 00:54:28.403 lat (usec) : 500=42.66%, 750=5.41%, 1000=2.73% 00:54:28.403 lat (msec) : 10=0.20%, 50=48.99% 00:54:28.403 cpu : usr=97.70%, sys=1.89%, ctx=6, majf=0, minf=0 00:54:28.403 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:28.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:28.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:28.403 issued rwts: total=1976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:28.403 latency : target=0, window=0, percentile=100.00%, depth=4 00:54:28.403 filename1: (groupid=0, jobs=1): err= 0: pid=115884: Mon Jul 22 15:07:47 2024 00:54:28.403 read: IOPS=192, BW=771KiB/s (790kB/s)(7744KiB/10038msec) 00:54:28.403 slat (nsec): min=5777, max=65730, avg=10693.54, stdev=7291.48 00:54:28.403 clat (usec): min=325, max=42558, avg=20703.35, stdev=20233.88 00:54:28.403 lat (usec): min=331, max=42567, avg=20714.05, stdev=20233.28 00:54:28.403 clat percentiles (usec): 00:54:28.403 | 1.00th=[ 343], 5.00th=[ 359], 10.00th=[ 375], 20.00th=[ 412], 00:54:28.403 | 30.00th=[ 441], 40.00th=[ 482], 50.00th=[ 4178], 60.00th=[40633], 00:54:28.403 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:54:28.403 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:54:28.403 | 99.99th=[42730] 00:54:28.403 bw ( KiB/s): min= 576, max= 1440, per=49.53%, avg=772.80, stdev=197.06, samples=20 00:54:28.403 iops : min= 144, max= 360, avg=193.20, stdev=49.27, samples=20 00:54:28.403 lat (usec) : 500=41.84%, 750=3.98%, 1000=3.98% 00:54:28.403 lat (msec) : 10=0.21%, 50=50.00% 00:54:28.403 cpu : usr=97.53%, sys=2.04%, ctx=9, majf=0, minf=9 00:54:28.403 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:28.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:28.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:28.403 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:28.403 latency : target=0, window=0, percentile=100.00%, depth=4 00:54:28.403 00:54:28.404 Run status group 0 (all jobs): 00:54:28.404 READ: bw=1559KiB/s (1596kB/s), 771KiB/s-787KiB/s (790kB/s-806kB/s), io=15.3MiB (16.0MB), run=10038-10039msec 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 ************************************ 00:54:28.404 END TEST fio_dif_1_multi_subsystems 00:54:28.404 ************************************ 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 00:54:28.404 real 0m11.171s 00:54:28.404 user 0m20.340s 00:54:28.404 sys 0m0.678s 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 15:07:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:54:28.404 15:07:47 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:54:28.404 15:07:47 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 ************************************ 00:54:28.404 START TEST fio_dif_rand_params 00:54:28.404 ************************************ 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 bdev_null0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:28.404 [2024-07-22 15:07:47.852450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:28.404 { 00:54:28.404 "params": { 00:54:28.404 "name": "Nvme$subsystem", 00:54:28.404 "trtype": "$TEST_TRANSPORT", 00:54:28.404 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:28.404 "adrfam": "ipv4", 00:54:28.404 "trsvcid": "$NVMF_PORT", 00:54:28.404 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:28.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:28.404 "hdgst": ${hdgst:-false}, 00:54:28.404 "ddgst": ${ddgst:-false} 00:54:28.404 }, 00:54:28.404 "method": "bdev_nvme_attach_controller" 00:54:28.404 } 00:54:28.404 EOF 00:54:28.404 )") 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:28.404 "params": { 00:54:28.404 "name": "Nvme0", 00:54:28.404 "trtype": "tcp", 00:54:28.404 "traddr": "10.0.0.2", 00:54:28.404 "adrfam": "ipv4", 00:54:28.404 "trsvcid": "4420", 00:54:28.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:28.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:28.404 "hdgst": false, 00:54:28.404 "ddgst": false 00:54:28.404 }, 00:54:28.404 "method": "bdev_nvme_attach_controller" 00:54:28.404 }' 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:28.404 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:28.405 15:07:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:28.405 15:07:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:28.663 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:54:28.663 ... 00:54:28.663 fio-3.35 00:54:28.663 Starting 3 threads 00:54:35.224 00:54:35.225 filename0: (groupid=0, jobs=1): err= 0: pid=116036: Mon Jul 22 15:07:53 2024 00:54:35.225 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(155MiB/5005msec) 00:54:35.225 slat (nsec): min=4246, max=78909, avg=14845.97, stdev=6897.89 00:54:35.225 clat (usec): min=3617, max=20469, avg=12082.55, stdev=3357.25 00:54:35.225 lat (usec): min=3626, max=20502, avg=12097.39, stdev=3356.75 00:54:35.225 clat percentiles (usec): 00:54:35.225 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 7767], 20.00th=[ 8586], 00:54:35.225 | 30.00th=[10028], 40.00th=[12780], 50.00th=[13566], 60.00th=[14091], 00:54:35.225 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:54:35.225 | 99.00th=[16188], 99.50th=[16450], 99.90th=[20317], 99.95th=[20579], 00:54:35.225 | 99.99th=[20579] 00:54:35.225 bw ( KiB/s): min=26112, max=44544, per=32.52%, avg=31675.90, stdev=6825.79, samples=10 00:54:35.225 iops : min= 204, max= 348, avg=247.40, stdev=53.19, samples=10 00:54:35.225 lat (msec) : 4=1.37%, 10=28.55%, 20=69.84%, 50=0.24% 00:54:35.225 cpu : usr=94.72%, sys=3.86%, ctx=12, majf=0, minf=0 00:54:35.225 IO depths : 1=8.7%, 2=91.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:35.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:35.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:35.225 issued rwts: total=1240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:35.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:54:35.225 filename0: (groupid=0, jobs=1): err= 0: pid=116037: Mon Jul 22 15:07:53 2024 00:54:35.225 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5009msec) 00:54:35.225 slat (usec): min=5, max=294, avg=14.41, stdev= 9.88 00:54:35.225 clat (usec): min=6260, max=52793, avg=11943.88, stdev=8719.50 00:54:35.225 lat (usec): min=6275, max=52823, avg=11958.29, stdev=8720.01 00:54:35.225 clat percentiles (usec): 00:54:35.225 | 1.00th=[ 6783], 5.00th=[ 7767], 10.00th=[ 8848], 20.00th=[ 9503], 00:54:35.225 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:54:35.225 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[12256], 00:54:35.225 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:54:35.225 | 99.99th=[52691] 00:54:35.225 bw ( KiB/s): min=19968, max=39168, per=32.93%, avg=32076.80, stdev=6964.01, samples=10 00:54:35.225 iops : min= 156, max= 306, avg=250.60, stdev=54.41, samples=10 00:54:35.225 lat (msec) : 10=40.88%, 20=54.34%, 50=1.20%, 100=3.59% 00:54:35.225 cpu : usr=95.35%, sys=3.41%, ctx=3, majf=0, minf=0 00:54:35.225 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:35.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:35.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:35.225 issued rwts: total=1255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:35.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:54:35.225 filename0: (groupid=0, jobs=1): err= 0: pid=116038: Mon Jul 22 15:07:53 2024 00:54:35.225 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(165MiB/5006msec) 00:54:35.225 slat (usec): min=6, max=276, avg=14.56, 
stdev= 9.98 00:54:35.225 clat (usec): min=4906, max=53683, avg=11379.52, stdev=5992.50 00:54:35.225 lat (usec): min=4933, max=53696, avg=11394.08, stdev=5992.81 00:54:35.225 clat percentiles (usec): 00:54:35.225 | 1.00th=[ 6128], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 8291], 00:54:35.225 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:54:35.225 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12780], 95.00th=[13042], 00:54:35.225 | 99.00th=[50070], 99.50th=[52691], 99.90th=[53216], 99.95th=[53740], 00:54:35.225 | 99.99th=[53740] 00:54:35.225 bw ( KiB/s): min=26880, max=39168, per=34.53%, avg=33638.40, stdev=3829.94, samples=10 00:54:35.225 iops : min= 210, max= 306, avg=262.80, stdev=29.92, samples=10 00:54:35.225 lat (msec) : 10=25.89%, 20=72.06%, 50=0.99%, 100=1.06% 00:54:35.225 cpu : usr=95.56%, sys=3.28%, ctx=8, majf=0, minf=0 00:54:35.225 IO depths : 1=5.5%, 2=94.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:35.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:35.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:35.225 issued rwts: total=1317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:35.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:54:35.225 00:54:35.225 Run status group 0 (all jobs): 00:54:35.225 READ: bw=95.1MiB/s (99.7MB/s), 31.0MiB/s-32.9MiB/s (32.5MB/s-34.5MB/s), io=477MiB (500MB), run=5005-5009msec 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 
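Each create_subsystem iteration traced below comes down to four RPC calls: create a null bdev with 16-byte metadata and the requested DIF type, create the NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. As a hedged stand-alone illustration, here is the same sequence issued directly through SPDK's scripts/rpc.py; the rpc.py path is inferred from the repository root visible in the log, and rpc_cmd in the trace is assumed to be a thin wrapper over the same client. RPC names and arguments are copied from the trace.

# Stand-alone sketch of one create_subsystem iteration (sub_id 0 shown).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed from the repo root in the log
sub=0         # the run below creates subsystems 0, 1 and 2
dif_type=2    # the first rand_params run used --dif-type 3, this one uses 2

# Size, block-size, metadata and DIF arguments copied verbatim from the trace.
"$rpc" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type "$dif_type"
"$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
        --serial-number "53313233-$sub" --allow-any-host
"$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
"$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
        -t tcp -a 10.0.0.2 -s 4420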
00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 bdev_null0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 [2024-07-22 15:07:53.853359] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 bdev_null1 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 
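Every fio launch in this section goes through the same sanitizer-preload pattern, visible in the traces above and repeated again below: ldd the spdk_bdev fio plugin, grep for libasan and libclang_rt.asan, and prepend whatever is found (here: nothing) to LD_PRELOAD together with the plugin itself before starting fio with --ioengine=spdk_bdev and the generated JSON config. A condensed bash sketch of that pattern follows; the plugin and fio paths are taken from the log, while the job-file and JSON-config names are placeholders (the trace feeds both over /dev/fd descriptors).

# Condensed sketch of the fio_bdev launch pattern traced in this section
# (simplified from the autotest helper; not the exact upstream code).
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_libs=""

# If the plugin links a sanitizer runtime, preload it so the runtime is
# available before the plugin's code runs inside fio.
for sanitizer in libasan libclang_rt.asan; do
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $lib ]] && asan_libs+=" $lib"
done

# Preload the sanitizer runtime(s), if any, plus the plugin, then run fio.
# bdev_config.json / dif.fio are placeholders; the trace passes the generated
# config and job file as /dev/fd/62 and /dev/fd/61 instead.
LD_PRELOAD="$asan_libs $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf bdev_config.json dif.fio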
00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.225 bdev_null2 00:54:35.225 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:54:35.226 15:07:53 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:35.226 { 00:54:35.226 "params": { 00:54:35.226 "name": "Nvme$subsystem", 00:54:35.226 "trtype": "$TEST_TRANSPORT", 00:54:35.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:35.226 "adrfam": "ipv4", 00:54:35.226 "trsvcid": "$NVMF_PORT", 00:54:35.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:35.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:35.226 "hdgst": ${hdgst:-false}, 00:54:35.226 "ddgst": ${ddgst:-false} 00:54:35.226 }, 00:54:35.226 "method": "bdev_nvme_attach_controller" 00:54:35.226 } 00:54:35.226 EOF 00:54:35.226 )") 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:35.226 { 00:54:35.226 "params": { 00:54:35.226 "name": "Nvme$subsystem", 00:54:35.226 "trtype": "$TEST_TRANSPORT", 00:54:35.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:35.226 "adrfam": "ipv4", 00:54:35.226 "trsvcid": "$NVMF_PORT", 00:54:35.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:35.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:35.226 "hdgst": ${hdgst:-false}, 00:54:35.226 "ddgst": ${ddgst:-false} 00:54:35.226 }, 00:54:35.226 "method": "bdev_nvme_attach_controller" 00:54:35.226 } 00:54:35.226 EOF 00:54:35.226 )") 00:54:35.226 15:07:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:35.226 { 00:54:35.226 "params": { 00:54:35.226 "name": "Nvme$subsystem", 00:54:35.226 "trtype": "$TEST_TRANSPORT", 00:54:35.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:35.226 "adrfam": "ipv4", 00:54:35.226 "trsvcid": "$NVMF_PORT", 00:54:35.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:35.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:35.226 "hdgst": ${hdgst:-false}, 00:54:35.226 "ddgst": ${ddgst:-false} 00:54:35.226 }, 00:54:35.226 "method": "bdev_nvme_attach_controller" 00:54:35.226 } 00:54:35.226 EOF 00:54:35.226 )") 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:35.226 "params": { 00:54:35.226 "name": "Nvme0", 00:54:35.226 "trtype": "tcp", 00:54:35.226 "traddr": "10.0.0.2", 00:54:35.226 "adrfam": "ipv4", 00:54:35.226 "trsvcid": "4420", 00:54:35.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:35.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:35.226 "hdgst": false, 00:54:35.226 "ddgst": false 00:54:35.226 }, 00:54:35.226 "method": "bdev_nvme_attach_controller" 00:54:35.226 },{ 00:54:35.226 "params": { 00:54:35.226 "name": "Nvme1", 00:54:35.226 "trtype": "tcp", 00:54:35.226 "traddr": "10.0.0.2", 00:54:35.226 "adrfam": "ipv4", 00:54:35.226 "trsvcid": "4420", 00:54:35.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:35.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:35.226 "hdgst": false, 00:54:35.226 "ddgst": false 00:54:35.226 }, 00:54:35.226 "method": "bdev_nvme_attach_controller" 00:54:35.226 },{ 00:54:35.226 "params": { 00:54:35.226 "name": "Nvme2", 00:54:35.226 "trtype": "tcp", 00:54:35.226 "traddr": "10.0.0.2", 00:54:35.226 "adrfam": "ipv4", 00:54:35.226 "trsvcid": "4420", 00:54:35.226 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:54:35.226 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:54:35.226 "hdgst": false, 00:54:35.226 "ddgst": false 00:54:35.226 }, 00:54:35.226 "method": "bdev_nvme_attach_controller" 00:54:35.226 }' 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:54:35.226 15:07:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:35.226 15:07:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:35.226 15:07:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:35.226 15:07:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:35.226 15:07:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:35.226 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:54:35.226 ... 00:54:35.226 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:54:35.226 ... 00:54:35.226 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:54:35.226 ... 00:54:35.226 fio-3.35 00:54:35.226 Starting 24 threads 00:54:47.499 00:54:47.499 filename0: (groupid=0, jobs=1): err= 0: pid=116134: Mon Jul 22 15:08:04 2024 00:54:47.499 read: IOPS=244, BW=977KiB/s (1000kB/s)(9776KiB/10006msec) 00:54:47.499 slat (usec): min=7, max=8019, avg=23.29, stdev=221.22 00:54:47.499 clat (msec): min=3, max=143, avg=65.36, stdev=23.70 00:54:47.499 lat (msec): min=3, max=143, avg=65.39, stdev=23.70 00:54:47.499 clat percentiles (msec): 00:54:47.499 | 1.00th=[ 5], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:54:47.499 | 30.00th=[ 50], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 69], 00:54:47.499 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 110], 00:54:47.499 | 99.00th=[ 126], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:54:47.499 | 99.99th=[ 144] 00:54:47.499 bw ( KiB/s): min= 688, max= 1536, per=4.61%, avg=971.20, stdev=226.86, samples=20 00:54:47.499 iops : min= 172, max= 384, avg=242.80, stdev=56.71, samples=20 00:54:47.499 lat (msec) : 4=0.65%, 10=1.31%, 20=0.41%, 50=28.15%, 100=61.50% 00:54:47.499 lat (msec) : 250=7.98% 00:54:47.499 cpu : usr=37.30%, sys=0.56%, ctx=1107, majf=0, minf=0 00:54:47.499 IO depths : 1=1.0%, 2=2.1%, 4=9.7%, 8=75.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:54:47.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 complete : 0=0.0%, 4=89.7%, 8=5.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.499 filename0: (groupid=0, jobs=1): err= 0: pid=116135: Mon Jul 22 15:08:04 2024 00:54:47.499 read: IOPS=205, BW=823KiB/s (843kB/s)(8240KiB/10013msec) 00:54:47.499 slat (usec): min=3, max=8043, avg=24.96, stdev=318.14 00:54:47.499 clat (msec): min=35, max=159, avg=77.60, stdev=22.31 00:54:47.499 lat (msec): min=35, max=159, avg=77.62, stdev=22.31 00:54:47.499 clat percentiles (msec): 00:54:47.499 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:54:47.499 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:54:47.499 | 70.00th=[ 88], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 120], 00:54:47.499 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 161], 00:54:47.499 | 99.99th=[ 161] 00:54:47.499 bw ( KiB/s): min= 552, max= 1048, per=3.88%, avg=817.70, stdev=134.15, samples=20 00:54:47.499 iops : min= 138, max= 262, avg=204.40, stdev=33.50, samples=20 00:54:47.499 lat (msec) : 50=11.17%, 100=73.45%, 250=15.39% 00:54:47.499 cpu : usr=33.59%, 
sys=0.56%, ctx=1019, majf=0, minf=9 00:54:47.499 IO depths : 1=1.9%, 2=4.1%, 4=12.3%, 8=70.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:54:47.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.499 filename0: (groupid=0, jobs=1): err= 0: pid=116136: Mon Jul 22 15:08:04 2024 00:54:47.499 read: IOPS=216, BW=867KiB/s (887kB/s)(8708KiB/10048msec) 00:54:47.499 slat (usec): min=7, max=8019, avg=18.91, stdev=183.54 00:54:47.499 clat (msec): min=29, max=179, avg=73.64, stdev=22.32 00:54:47.499 lat (msec): min=29, max=179, avg=73.66, stdev=22.33 00:54:47.499 clat percentiles (msec): 00:54:47.499 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 56], 00:54:47.499 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:54:47.499 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 112], 00:54:47.499 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:54:47.499 | 99.99th=[ 180] 00:54:47.499 bw ( KiB/s): min= 608, max= 1216, per=4.11%, avg=866.30, stdev=126.61, samples=20 00:54:47.499 iops : min= 152, max= 304, avg=216.55, stdev=31.64, samples=20 00:54:47.499 lat (msec) : 50=16.58%, 100=72.49%, 250=10.93% 00:54:47.499 cpu : usr=37.14%, sys=0.47%, ctx=1097, majf=0, minf=9 00:54:47.499 IO depths : 1=1.0%, 2=2.4%, 4=8.4%, 8=74.2%, 16=14.0%, 32=0.0%, >=64=0.0% 00:54:47.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 complete : 0=0.0%, 4=90.1%, 8=6.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.499 filename0: (groupid=0, jobs=1): err= 0: pid=116137: Mon Jul 22 15:08:04 2024 00:54:47.499 read: IOPS=213, BW=853KiB/s (873kB/s)(8568KiB/10050msec) 00:54:47.499 slat (nsec): min=7025, max=59276, avg=15412.63, stdev=10136.90 00:54:47.499 clat (msec): min=29, max=159, avg=74.86, stdev=23.00 00:54:47.499 lat (msec): min=29, max=159, avg=74.88, stdev=23.00 00:54:47.499 clat percentiles (msec): 00:54:47.499 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:54:47.499 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 77], 00:54:47.499 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 121], 00:54:47.499 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:54:47.499 | 99.99th=[ 161] 00:54:47.499 bw ( KiB/s): min= 512, max= 1025, per=4.04%, avg=852.20, stdev=134.08, samples=20 00:54:47.499 iops : min= 128, max= 256, avg=213.00, stdev=33.54, samples=20 00:54:47.499 lat (msec) : 50=13.21%, 100=74.70%, 250=12.09% 00:54:47.499 cpu : usr=34.06%, sys=0.39%, ctx=891, majf=0, minf=9 00:54:47.499 IO depths : 1=1.1%, 2=2.3%, 4=9.4%, 8=73.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:54:47.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 complete : 0=0.0%, 4=89.9%, 8=6.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 issued rwts: total=2142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.499 filename0: (groupid=0, jobs=1): err= 0: pid=116138: Mon Jul 22 15:08:04 2024 00:54:47.499 read: IOPS=197, BW=791KiB/s (810kB/s)(7932KiB/10033msec) 00:54:47.499 slat (usec): min=6, max=4020, avg=16.23, stdev=90.44 00:54:47.499 clat 
(msec): min=34, max=174, avg=80.77, stdev=23.20 00:54:47.499 lat (msec): min=34, max=174, avg=80.79, stdev=23.21 00:54:47.499 clat percentiles (msec): 00:54:47.499 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 61], 00:54:47.499 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 84], 00:54:47.499 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 117], 95.00th=[ 126], 00:54:47.499 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 176], 00:54:47.499 | 99.99th=[ 176] 00:54:47.499 bw ( KiB/s): min= 512, max= 1024, per=3.73%, avg=786.80, stdev=120.67, samples=20 00:54:47.499 iops : min= 128, max= 256, avg=196.70, stdev=30.17, samples=20 00:54:47.499 lat (msec) : 50=8.93%, 100=73.68%, 250=17.40% 00:54:47.499 cpu : usr=33.94%, sys=0.61%, ctx=972, majf=0, minf=9 00:54:47.499 IO depths : 1=2.1%, 2=4.5%, 4=14.5%, 8=67.8%, 16=11.1%, 32=0.0%, >=64=0.0% 00:54:47.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 complete : 0=0.0%, 4=90.7%, 8=4.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.499 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.499 filename0: (groupid=0, jobs=1): err= 0: pid=116139: Mon Jul 22 15:08:04 2024 00:54:47.499 read: IOPS=193, BW=774KiB/s (792kB/s)(7740KiB/10005msec) 00:54:47.499 slat (usec): min=6, max=8063, avg=22.15, stdev=258.02 00:54:47.499 clat (msec): min=11, max=154, avg=82.55, stdev=23.32 00:54:47.499 lat (msec): min=11, max=154, avg=82.57, stdev=23.33 00:54:47.499 clat percentiles (msec): 00:54:47.499 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 63], 00:54:47.500 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 86], 00:54:47.500 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 118], 95.00th=[ 122], 00:54:47.500 | 99.00th=[ 142], 99.50th=[ 153], 99.90th=[ 155], 99.95th=[ 155], 00:54:47.500 | 99.99th=[ 155] 00:54:47.500 bw ( KiB/s): min= 513, max= 1024, per=3.62%, avg=763.16, stdev=115.94, samples=19 00:54:47.500 iops : min= 128, max= 256, avg=190.74, stdev=29.02, samples=19 00:54:47.500 lat (msec) : 20=0.52%, 50=7.24%, 100=70.54%, 250=21.71% 00:54:47.500 cpu : usr=32.61%, sys=0.50%, ctx=911, majf=0, minf=9 00:54:47.500 IO depths : 1=1.9%, 2=4.3%, 4=13.8%, 8=68.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:54:47.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 issued rwts: total=1935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.500 filename0: (groupid=0, jobs=1): err= 0: pid=116140: Mon Jul 22 15:08:04 2024 00:54:47.500 read: IOPS=240, BW=964KiB/s (987kB/s)(9704KiB/10068msec) 00:54:47.500 slat (usec): min=6, max=5018, avg=27.17, stdev=211.25 00:54:47.500 clat (msec): min=5, max=135, avg=66.05, stdev=22.94 00:54:47.500 lat (msec): min=5, max=135, avg=66.08, stdev=22.95 00:54:47.500 clat percentiles (msec): 00:54:47.500 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 46], 00:54:47.500 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 70], 00:54:47.500 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 106], 00:54:47.500 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 136], 99.95th=[ 136], 00:54:47.500 | 99.99th=[ 136] 00:54:47.500 bw ( KiB/s): min= 636, max= 1296, per=4.57%, avg=963.80, stdev=204.82, samples=20 00:54:47.500 iops : min= 159, max= 324, avg=240.95, stdev=51.20, samples=20 00:54:47.500 lat (msec) : 
10=1.32%, 20=1.24%, 50=26.71%, 100=62.28%, 250=8.45% 00:54:47.500 cpu : usr=42.04%, sys=0.69%, ctx=1357, majf=0, minf=9 00:54:47.500 IO depths : 1=1.7%, 2=3.8%, 4=12.7%, 8=70.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:54:47.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 issued rwts: total=2426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.500 filename0: (groupid=0, jobs=1): err= 0: pid=116141: Mon Jul 22 15:08:04 2024 00:54:47.500 read: IOPS=243, BW=973KiB/s (996kB/s)(9776KiB/10050msec) 00:54:47.500 slat (usec): min=4, max=8015, avg=19.53, stdev=239.96 00:54:47.500 clat (msec): min=33, max=144, avg=65.60, stdev=22.60 00:54:47.500 lat (msec): min=33, max=144, avg=65.62, stdev=22.60 00:54:47.500 clat percentiles (msec): 00:54:47.500 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 46], 00:54:47.500 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 69], 00:54:47.500 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 109], 00:54:47.500 | 99.00th=[ 127], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:54:47.500 | 99.99th=[ 144] 00:54:47.500 bw ( KiB/s): min= 560, max= 1216, per=4.61%, avg=971.15, stdev=189.59, samples=20 00:54:47.500 iops : min= 140, max= 304, avg=242.75, stdev=47.46, samples=20 00:54:47.500 lat (msec) : 50=33.92%, 100=57.90%, 250=8.18% 00:54:47.500 cpu : usr=41.84%, sys=0.63%, ctx=1193, majf=0, minf=9 00:54:47.500 IO depths : 1=1.7%, 2=3.9%, 4=11.9%, 8=71.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:54:47.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 issued rwts: total=2444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.500 filename1: (groupid=0, jobs=1): err= 0: pid=116142: Mon Jul 22 15:08:04 2024 00:54:47.500 read: IOPS=234, BW=936KiB/s (958kB/s)(9404KiB/10047msec) 00:54:47.500 slat (usec): min=6, max=4020, avg=15.34, stdev=124.13 00:54:47.500 clat (msec): min=23, max=163, avg=68.18, stdev=21.71 00:54:47.500 lat (msec): min=23, max=163, avg=68.19, stdev=21.71 00:54:47.500 clat percentiles (msec): 00:54:47.500 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:54:47.500 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 70], 00:54:47.500 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:54:47.500 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:54:47.500 | 99.99th=[ 163] 00:54:47.500 bw ( KiB/s): min= 688, max= 1248, per=4.44%, avg=935.70, stdev=140.94, samples=20 00:54:47.500 iops : min= 172, max= 312, avg=233.90, stdev=35.24, samples=20 00:54:47.500 lat (msec) : 50=22.88%, 100=69.08%, 250=8.04% 00:54:47.500 cpu : usr=43.49%, sys=0.68%, ctx=1456, majf=0, minf=9 00:54:47.500 IO depths : 1=1.6%, 2=3.7%, 4=12.2%, 8=70.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:54:47.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 issued rwts: total=2351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.500 filename1: (groupid=0, jobs=1): err= 0: pid=116143: Mon Jul 22 15:08:04 2024 00:54:47.500 read: IOPS=253, BW=1013KiB/s (1037kB/s)(9.95MiB/10055msec) 
00:54:47.500 slat (usec): min=6, max=8037, avg=22.28, stdev=286.42 00:54:47.500 clat (msec): min=15, max=155, avg=62.89, stdev=21.21 00:54:47.500 lat (msec): min=15, max=155, avg=62.91, stdev=21.21 00:54:47.500 clat percentiles (msec): 00:54:47.500 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:54:47.500 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 65], 00:54:47.500 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 106], 00:54:47.500 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 157], 00:54:47.500 | 99.99th=[ 157] 00:54:47.500 bw ( KiB/s): min= 638, max= 1376, per=4.80%, avg=1012.00, stdev=181.68, samples=20 00:54:47.500 iops : min= 159, max= 344, avg=252.95, stdev=45.47, samples=20 00:54:47.500 lat (msec) : 20=0.63%, 50=32.05%, 100=61.47%, 250=5.85% 00:54:47.500 cpu : usr=42.74%, sys=0.64%, ctx=1169, majf=0, minf=9 00:54:47.500 IO depths : 1=1.5%, 2=3.2%, 4=10.6%, 8=72.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:54:47.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 issued rwts: total=2546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.500 filename1: (groupid=0, jobs=1): err= 0: pid=116144: Mon Jul 22 15:08:04 2024 00:54:47.500 read: IOPS=247, BW=989KiB/s (1013kB/s)(9952KiB/10062msec) 00:54:47.500 slat (usec): min=4, max=8043, avg=21.12, stdev=278.12 00:54:47.500 clat (msec): min=5, max=144, avg=64.36, stdev=23.96 00:54:47.500 lat (msec): min=5, max=144, avg=64.38, stdev=23.96 00:54:47.500 clat percentiles (msec): 00:54:47.500 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:54:47.500 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 66], 00:54:47.500 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 111], 00:54:47.500 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:54:47.500 | 99.99th=[ 144] 00:54:47.500 bw ( KiB/s): min= 640, max= 1280, per=4.71%, avg=992.70, stdev=155.57, samples=20 00:54:47.500 iops : min= 160, max= 320, avg=248.15, stdev=38.94, samples=20 00:54:47.500 lat (msec) : 10=1.85%, 20=0.72%, 50=29.98%, 100=60.29%, 250=7.15% 00:54:47.500 cpu : usr=37.96%, sys=0.44%, ctx=1041, majf=0, minf=9 00:54:47.500 IO depths : 1=1.1%, 2=2.5%, 4=9.2%, 8=74.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:54:47.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 issued rwts: total=2488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.500 filename1: (groupid=0, jobs=1): err= 0: pid=116145: Mon Jul 22 15:08:04 2024 00:54:47.500 read: IOPS=238, BW=952KiB/s (975kB/s)(9584KiB/10063msec) 00:54:47.500 slat (usec): min=7, max=8039, avg=29.15, stdev=365.73 00:54:47.500 clat (msec): min=4, max=135, avg=66.81, stdev=24.14 00:54:47.500 lat (msec): min=4, max=135, avg=66.84, stdev=24.15 00:54:47.500 clat percentiles (msec): 00:54:47.500 | 1.00th=[ 7], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 48], 00:54:47.500 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:54:47.500 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 99], 95.00th=[ 112], 00:54:47.500 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:54:47.500 | 99.99th=[ 136] 00:54:47.500 bw ( KiB/s): min= 636, max= 1410, per=4.53%, avg=955.90, stdev=174.22, samples=20 
00:54:47.500 iops : min= 159, max= 352, avg=238.95, stdev=43.49, samples=20 00:54:47.500 lat (msec) : 10=2.00%, 50=26.09%, 100=63.15%, 250=8.76% 00:54:47.500 cpu : usr=34.32%, sys=0.42%, ctx=903, majf=0, minf=9 00:54:47.500 IO depths : 1=0.7%, 2=1.5%, 4=6.8%, 8=77.0%, 16=14.1%, 32=0.0%, >=64=0.0% 00:54:47.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 complete : 0=0.0%, 4=89.7%, 8=6.9%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.500 issued rwts: total=2396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.500 filename1: (groupid=0, jobs=1): err= 0: pid=116146: Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=219, BW=878KiB/s (899kB/s)(8824KiB/10052msec) 00:54:47.501 slat (nsec): min=7038, max=62342, avg=11461.56, stdev=4850.87 00:54:47.501 clat (msec): min=32, max=155, avg=72.74, stdev=20.19 00:54:47.501 lat (msec): min=32, max=155, avg=72.75, stdev=20.19 00:54:47.501 clat percentiles (msec): 00:54:47.501 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:54:47.501 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:54:47.501 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 105], 00:54:47.501 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:54:47.501 | 99.99th=[ 157] 00:54:47.501 bw ( KiB/s): min= 637, max= 1264, per=4.15%, avg=875.60, stdev=146.19, samples=20 00:54:47.501 iops : min= 159, max= 316, avg=218.85, stdev=36.57, samples=20 00:54:47.501 lat (msec) : 50=15.64%, 100=78.01%, 250=6.35% 00:54:47.501 cpu : usr=34.43%, sys=0.56%, ctx=959, majf=0, minf=9 00:54:47.501 IO depths : 1=2.2%, 2=4.9%, 4=13.9%, 8=67.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:54:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 complete : 0=0.0%, 4=91.2%, 8=4.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.501 filename1: (groupid=0, jobs=1): err= 0: pid=116147: Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=206, BW=824KiB/s (844kB/s)(8248KiB/10005msec) 00:54:47.501 slat (usec): min=6, max=8021, avg=33.28, stdev=279.91 00:54:47.501 clat (msec): min=6, max=171, avg=77.38, stdev=22.22 00:54:47.501 lat (msec): min=6, max=171, avg=77.41, stdev=22.23 00:54:47.501 clat percentiles (msec): 00:54:47.501 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:54:47.501 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 83], 00:54:47.501 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 118], 00:54:47.501 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 171], 99.95th=[ 171], 00:54:47.501 | 99.99th=[ 171] 00:54:47.501 bw ( KiB/s): min= 640, max= 1016, per=3.81%, avg=804.79, stdev=97.78, samples=19 00:54:47.501 iops : min= 160, max= 254, avg=201.16, stdev=24.44, samples=19 00:54:47.501 lat (msec) : 10=0.78%, 50=9.31%, 100=73.67%, 250=16.25% 00:54:47.501 cpu : usr=35.33%, sys=0.56%, ctx=936, majf=0, minf=9 00:54:47.501 IO depths : 1=1.8%, 2=4.6%, 4=14.3%, 8=68.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:54:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 issued rwts: total=2062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.501 filename1: (groupid=0, jobs=1): err= 0: pid=116148: 
Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=201, BW=805KiB/s (825kB/s)(8088KiB/10044msec) 00:54:47.501 slat (usec): min=3, max=8047, avg=24.12, stdev=260.95 00:54:47.501 clat (msec): min=35, max=144, avg=79.29, stdev=19.08 00:54:47.501 lat (msec): min=35, max=144, avg=79.32, stdev=19.08 00:54:47.501 clat percentiles (msec): 00:54:47.501 | 1.00th=[ 44], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 64], 00:54:47.501 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 84], 00:54:47.501 | 70.00th=[ 89], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 117], 00:54:47.501 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 146], 00:54:47.501 | 99.99th=[ 146] 00:54:47.501 bw ( KiB/s): min= 638, max= 1008, per=3.80%, avg=801.95, stdev=104.64, samples=20 00:54:47.501 iops : min= 159, max= 252, avg=200.45, stdev=26.21, samples=20 00:54:47.501 lat (msec) : 50=3.66%, 100=83.09%, 250=13.25% 00:54:47.501 cpu : usr=39.85%, sys=0.47%, ctx=1239, majf=0, minf=10 00:54:47.501 IO depths : 1=3.4%, 2=7.4%, 4=18.3%, 8=61.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:54:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.501 filename1: (groupid=0, jobs=1): err= 0: pid=116149: Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=202, BW=808KiB/s (828kB/s)(8088KiB/10007msec) 00:54:47.501 slat (usec): min=3, max=8043, avg=24.71, stdev=309.46 00:54:47.501 clat (msec): min=11, max=163, avg=78.86, stdev=23.34 00:54:47.501 lat (msec): min=11, max=163, avg=78.89, stdev=23.34 00:54:47.501 clat percentiles (msec): 00:54:47.501 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 61], 00:54:47.501 | 30.00th=[ 66], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 84], 00:54:47.501 | 70.00th=[ 92], 80.00th=[ 99], 90.00th=[ 111], 95.00th=[ 121], 00:54:47.501 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 165], 99.95th=[ 165], 00:54:47.501 | 99.99th=[ 165] 00:54:47.501 bw ( KiB/s): min= 640, max= 944, per=3.78%, avg=797.21, stdev=104.22, samples=19 00:54:47.501 iops : min= 160, max= 236, avg=199.26, stdev=26.07, samples=19 00:54:47.501 lat (msec) : 20=0.79%, 50=6.53%, 100=75.47%, 250=17.21% 00:54:47.501 cpu : usr=40.27%, sys=0.60%, ctx=1152, majf=0, minf=9 00:54:47.501 IO depths : 1=3.6%, 2=7.9%, 4=18.9%, 8=60.5%, 16=9.1%, 32=0.0%, >=64=0.0% 00:54:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.501 filename2: (groupid=0, jobs=1): err= 0: pid=116150: Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=208, BW=834KiB/s (855kB/s)(8380KiB/10042msec) 00:54:47.501 slat (usec): min=4, max=8004, avg=23.94, stdev=219.02 00:54:47.501 clat (msec): min=36, max=169, avg=76.36, stdev=22.63 00:54:47.501 lat (msec): min=36, max=169, avg=76.38, stdev=22.63 00:54:47.501 clat percentiles (msec): 00:54:47.501 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 61], 00:54:47.501 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 80], 00:54:47.501 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 127], 00:54:47.501 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 169], 99.95th=[ 169], 00:54:47.501 | 99.99th=[ 169] 00:54:47.501 bw ( 
KiB/s): min= 638, max= 1200, per=3.96%, avg=834.90, stdev=146.57, samples=20 00:54:47.501 iops : min= 159, max= 300, avg=208.70, stdev=36.68, samples=20 00:54:47.501 lat (msec) : 50=11.31%, 100=76.61%, 250=12.08% 00:54:47.501 cpu : usr=43.04%, sys=0.62%, ctx=1473, majf=0, minf=9 00:54:47.501 IO depths : 1=2.8%, 2=6.2%, 4=16.0%, 8=65.0%, 16=9.9%, 32=0.0%, >=64=0.0% 00:54:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.501 filename2: (groupid=0, jobs=1): err= 0: pid=116151: Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=193, BW=774KiB/s (793kB/s)(7744KiB/10006msec) 00:54:47.501 slat (usec): min=3, max=9017, avg=16.13, stdev=204.73 00:54:47.501 clat (msec): min=11, max=177, avg=82.57, stdev=24.03 00:54:47.501 lat (msec): min=11, max=177, avg=82.59, stdev=24.03 00:54:47.501 clat percentiles (msec): 00:54:47.501 | 1.00th=[ 34], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 63], 00:54:47.501 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 86], 00:54:47.501 | 70.00th=[ 93], 80.00th=[ 100], 90.00th=[ 115], 95.00th=[ 128], 00:54:47.501 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 178], 99.95th=[ 178], 00:54:47.501 | 99.99th=[ 178] 00:54:47.501 bw ( KiB/s): min= 560, max= 970, per=3.64%, avg=767.68, stdev=117.36, samples=19 00:54:47.501 iops : min= 140, max= 242, avg=191.89, stdev=29.29, samples=19 00:54:47.501 lat (msec) : 20=0.83%, 50=4.96%, 100=74.33%, 250=19.89% 00:54:47.501 cpu : usr=32.58%, sys=0.51%, ctx=947, majf=0, minf=9 00:54:47.501 IO depths : 1=3.6%, 2=7.6%, 4=18.7%, 8=61.2%, 16=9.0%, 32=0.0%, >=64=0.0% 00:54:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 complete : 0=0.0%, 4=92.2%, 8=2.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.501 filename2: (groupid=0, jobs=1): err= 0: pid=116152: Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=197, BW=789KiB/s (808kB/s)(7896KiB/10006msec) 00:54:47.501 slat (usec): min=5, max=8020, avg=22.74, stdev=269.15 00:54:47.501 clat (msec): min=11, max=166, avg=80.91, stdev=21.78 00:54:47.501 lat (msec): min=11, max=166, avg=80.93, stdev=21.79 00:54:47.501 clat percentiles (msec): 00:54:47.501 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 65], 00:54:47.501 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 84], 00:54:47.501 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 122], 00:54:47.501 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:54:47.501 | 99.99th=[ 167] 00:54:47.501 bw ( KiB/s): min= 560, max= 1024, per=3.72%, avg=783.79, stdev=125.28, samples=19 00:54:47.501 iops : min= 140, max= 256, avg=195.89, stdev=31.36, samples=19 00:54:47.501 lat (msec) : 20=0.25%, 50=6.13%, 100=76.85%, 250=16.77% 00:54:47.501 cpu : usr=38.21%, sys=0.69%, ctx=1191, majf=0, minf=9 00:54:47.501 IO depths : 1=2.9%, 2=6.4%, 4=16.0%, 8=64.5%, 16=10.3%, 32=0.0%, >=64=0.0% 00:54:47.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.501 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.501 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:54:47.501 filename2: (groupid=0, jobs=1): err= 0: pid=116153: Mon Jul 22 15:08:04 2024 00:54:47.501 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.3MiB/10038msec) 00:54:47.501 slat (usec): min=4, max=5047, avg=18.83, stdev=172.27 00:54:47.501 clat (msec): min=22, max=151, avg=60.96, stdev=19.33 00:54:47.502 lat (msec): min=22, max=151, avg=60.98, stdev=19.33 00:54:47.502 clat percentiles (msec): 00:54:47.502 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 44], 00:54:47.502 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 64], 00:54:47.502 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 90], 95.00th=[ 95], 00:54:47.502 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 153], 99.95th=[ 153], 00:54:47.502 | 99.99th=[ 153] 00:54:47.502 bw ( KiB/s): min= 720, max= 1328, per=4.95%, avg=1044.75, stdev=180.24, samples=20 00:54:47.502 iops : min= 180, max= 332, avg=261.15, stdev=45.13, samples=20 00:54:47.502 lat (msec) : 50=36.72%, 100=59.67%, 250=3.61% 00:54:47.502 cpu : usr=44.88%, sys=0.74%, ctx=1458, majf=0, minf=9 00:54:47.502 IO depths : 1=0.8%, 2=1.8%, 4=9.3%, 8=75.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:54:47.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 issued rwts: total=2628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.502 filename2: (groupid=0, jobs=1): err= 0: pid=116154: Mon Jul 22 15:08:04 2024 00:54:47.502 read: IOPS=242, BW=972KiB/s (995kB/s)(9780KiB/10064msec) 00:54:47.502 slat (usec): min=5, max=8006, avg=14.38, stdev=161.76 00:54:47.502 clat (msec): min=11, max=155, avg=65.78, stdev=20.15 00:54:47.502 lat (msec): min=11, max=155, avg=65.79, stdev=20.15 00:54:47.502 clat percentiles (msec): 00:54:47.502 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 48], 00:54:47.502 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:54:47.502 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 105], 00:54:47.502 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:54:47.502 | 99.99th=[ 157] 00:54:47.502 bw ( KiB/s): min= 614, max= 1248, per=4.60%, avg=970.90, stdev=155.77, samples=20 00:54:47.502 iops : min= 153, max= 312, avg=242.70, stdev=39.00, samples=20 00:54:47.502 lat (msec) : 20=0.65%, 50=24.21%, 100=69.16%, 250=5.97% 00:54:47.502 cpu : usr=34.22%, sys=0.46%, ctx=1085, majf=0, minf=9 00:54:47.502 IO depths : 1=0.4%, 2=1.3%, 4=8.3%, 8=76.9%, 16=13.1%, 32=0.0%, >=64=0.0% 00:54:47.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 issued rwts: total=2445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.502 filename2: (groupid=0, jobs=1): err= 0: pid=116155: Mon Jul 22 15:08:04 2024 00:54:47.502 read: IOPS=208, BW=835KiB/s (855kB/s)(8392KiB/10048msec) 00:54:47.502 slat (usec): min=4, max=8024, avg=15.15, stdev=175.03 00:54:47.502 clat (msec): min=24, max=138, avg=76.51, stdev=19.52 00:54:47.502 lat (msec): min=24, max=138, avg=76.52, stdev=19.53 00:54:47.502 clat percentiles (msec): 00:54:47.502 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 62], 00:54:47.502 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 82], 00:54:47.502 | 70.00th=[ 86], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 112], 00:54:47.502 | 99.00th=[ 131], 99.50th=[ 134], 
99.90th=[ 140], 99.95th=[ 140], 00:54:47.502 | 99.99th=[ 140] 00:54:47.502 bw ( KiB/s): min= 640, max= 1152, per=3.95%, avg=832.50, stdev=124.55, samples=20 00:54:47.502 iops : min= 160, max= 288, avg=208.10, stdev=31.17, samples=20 00:54:47.502 lat (msec) : 50=9.91%, 100=79.50%, 250=10.58% 00:54:47.502 cpu : usr=37.01%, sys=0.64%, ctx=1026, majf=0, minf=9 00:54:47.502 IO depths : 1=2.0%, 2=4.5%, 4=14.9%, 8=67.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:54:47.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.502 filename2: (groupid=0, jobs=1): err= 0: pid=116156: Mon Jul 22 15:08:04 2024 00:54:47.502 read: IOPS=216, BW=866KiB/s (887kB/s)(8700KiB/10047msec) 00:54:47.502 slat (usec): min=6, max=4055, avg=24.08, stdev=173.33 00:54:47.502 clat (msec): min=33, max=141, avg=73.71, stdev=21.72 00:54:47.502 lat (msec): min=33, max=141, avg=73.74, stdev=21.72 00:54:47.502 clat percentiles (msec): 00:54:47.502 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 56], 00:54:47.502 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 75], 00:54:47.502 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 103], 95.00th=[ 116], 00:54:47.502 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 134], 99.95th=[ 142], 00:54:47.502 | 99.99th=[ 142] 00:54:47.502 bw ( KiB/s): min= 592, max= 1152, per=4.09%, avg=863.20, stdev=138.07, samples=20 00:54:47.502 iops : min= 148, max= 288, avg=215.75, stdev=34.53, samples=20 00:54:47.502 lat (msec) : 50=15.17%, 100=72.14%, 250=12.69% 00:54:47.502 cpu : usr=46.02%, sys=0.71%, ctx=1664, majf=0, minf=9 00:54:47.502 IO depths : 1=3.4%, 2=7.1%, 4=17.2%, 8=62.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:54:47.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 issued rwts: total=2175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:47.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.502 filename2: (groupid=0, jobs=1): err= 0: pid=116157: Mon Jul 22 15:08:04 2024 00:54:47.502 read: IOPS=198, BW=795KiB/s (814kB/s)(7968KiB/10020msec) 00:54:47.502 slat (usec): min=6, max=10103, avg=23.72, stdev=259.43 00:54:47.502 clat (msec): min=35, max=167, avg=80.28, stdev=22.25 00:54:47.502 lat (msec): min=35, max=167, avg=80.30, stdev=22.26 00:54:47.502 clat percentiles (msec): 00:54:47.502 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 59], 20.00th=[ 63], 00:54:47.502 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 85], 00:54:47.502 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 121], 00:54:47.502 | 99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 167], 99.95th=[ 167], 00:54:47.502 | 99.99th=[ 167] 00:54:47.502 bw ( KiB/s): min= 640, max= 1040, per=3.75%, avg=790.00, stdev=115.55, samples=20 00:54:47.502 iops : min= 160, max= 260, avg=197.50, stdev=28.89, samples=20 00:54:47.502 lat (msec) : 50=6.38%, 100=77.61%, 250=16.01% 00:54:47.502 cpu : usr=42.68%, sys=0.60%, ctx=1268, majf=0, minf=9 00:54:47.502 IO depths : 1=3.3%, 2=7.0%, 4=17.6%, 8=62.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:54:47.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:47.502 issued rwts: total=1992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:54:47.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:54:47.502 00:54:47.502 Run status group 0 (all jobs): 00:54:47.502 READ: bw=20.6MiB/s (21.6MB/s), 774KiB/s-1047KiB/s (792kB/s-1072kB/s), io=207MiB (217MB), run=10005-10068msec 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:54:47.502 15:08:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.502 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 bdev_null0 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 [2024-07-22 15:08:05.305651] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:54:47.503 
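For reference, each rpc_cmd in the trace above is just a scripts/rpc.py call against the already-running nvmf_tgt, so creating subsystem 0 for this pass amounts to roughly the following standalone sequence. This is a sketch: the bdev arguments and the 10.0.0.2:4420 TCP listener are copied from the trace, and it assumes the TCP transport was created earlier in the run (outside this excerpt).

    # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, protection information (DIF) type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # expose it over NVMe/TCP as cnode0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Subsystem 1 below is created the same way with bdev_null1 and cnode1.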
15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 bdev_null1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:47.503 { 00:54:47.503 "params": { 00:54:47.503 "name": "Nvme$subsystem", 00:54:47.503 "trtype": "$TEST_TRANSPORT", 00:54:47.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:47.503 "adrfam": "ipv4", 00:54:47.503 "trsvcid": "$NVMF_PORT", 00:54:47.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:47.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:47.503 "hdgst": ${hdgst:-false}, 00:54:47.503 "ddgst": ${ddgst:-false} 00:54:47.503 }, 00:54:47.503 "method": "bdev_nvme_attach_controller" 00:54:47.503 } 00:54:47.503 EOF 00:54:47.503 )") 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:47.503 
15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:47.503 { 00:54:47.503 "params": { 00:54:47.503 "name": "Nvme$subsystem", 00:54:47.503 "trtype": "$TEST_TRANSPORT", 00:54:47.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:47.503 "adrfam": "ipv4", 00:54:47.503 "trsvcid": "$NVMF_PORT", 00:54:47.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:47.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:47.503 "hdgst": ${hdgst:-false}, 00:54:47.503 "ddgst": ${ddgst:-false} 00:54:47.503 }, 00:54:47.503 "method": "bdev_nvme_attach_controller" 00:54:47.503 } 00:54:47.503 EOF 00:54:47.503 )") 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
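The fio_bdev wrapper traced above is ordinary fio with SPDK's bdev ioengine preloaded; the two /dev/fd arguments are process substitutions carrying the generated SPDK JSON config (the bdev_nvme_attach_controller entries printed just below) and the generated fio job file. Outside the harness the equivalent invocation looks roughly like this sketch, where bdev.json and job.fio are hypothetical stand-ins for those two generated descriptors:

    # bdev.json: gen_nvmf_target_json output (bdev_nvme_attach_controller entries for Nvme0/Nvme1)
    # job.fio:   gen_fio_conf output (randread, bs=8k,16k,128k, iodepth=8, 2 jobs over 2 files)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio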
00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:54:47.503 15:08:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:47.503 "params": { 00:54:47.503 "name": "Nvme0", 00:54:47.503 "trtype": "tcp", 00:54:47.503 "traddr": "10.0.0.2", 00:54:47.503 "adrfam": "ipv4", 00:54:47.503 "trsvcid": "4420", 00:54:47.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:47.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:47.504 "hdgst": false, 00:54:47.504 "ddgst": false 00:54:47.504 }, 00:54:47.504 "method": "bdev_nvme_attach_controller" 00:54:47.504 },{ 00:54:47.504 "params": { 00:54:47.504 "name": "Nvme1", 00:54:47.504 "trtype": "tcp", 00:54:47.504 "traddr": "10.0.0.2", 00:54:47.504 "adrfam": "ipv4", 00:54:47.504 "trsvcid": "4420", 00:54:47.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:54:47.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:54:47.504 "hdgst": false, 00:54:47.504 "ddgst": false 00:54:47.504 }, 00:54:47.504 "method": "bdev_nvme_attach_controller" 00:54:47.504 }' 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:47.504 15:08:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:47.504 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:54:47.504 ... 00:54:47.504 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:54:47.504 ... 
00:54:47.504 fio-3.35 00:54:47.504 Starting 4 threads 00:54:51.695 00:54:51.695 filename0: (groupid=0, jobs=1): err= 0: pid=116289: Mon Jul 22 15:08:11 2024 00:54:51.695 read: IOPS=2085, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5001msec) 00:54:51.695 slat (nsec): min=6139, max=55276, avg=9317.54, stdev=4204.83 00:54:51.695 clat (usec): min=2596, max=5861, avg=3788.57, stdev=341.32 00:54:51.695 lat (usec): min=2604, max=5869, avg=3797.89, stdev=341.42 00:54:51.695 clat percentiles (usec): 00:54:51.695 | 1.00th=[ 2900], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3556], 00:54:51.695 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:54:51.695 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4178], 95.00th=[ 4490], 00:54:51.695 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5473], 99.95th=[ 5669], 00:54:51.695 | 99.99th=[ 5735] 00:54:51.695 bw ( KiB/s): min=14960, max=18176, per=25.02%, avg=16711.11, stdev=979.80, samples=9 00:54:51.695 iops : min= 1870, max= 2272, avg=2088.89, stdev=122.47, samples=9 00:54:51.695 lat (msec) : 4=85.24%, 10=14.76% 00:54:51.695 cpu : usr=95.94%, sys=2.98%, ctx=11, majf=0, minf=0 00:54:51.695 IO depths : 1=8.0%, 2=25.0%, 4=50.0%, 8=17.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:51.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 issued rwts: total=10432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:51.695 latency : target=0, window=0, percentile=100.00%, depth=8 00:54:51.695 filename0: (groupid=0, jobs=1): err= 0: pid=116290: Mon Jul 22 15:08:11 2024 00:54:51.695 read: IOPS=2085, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5002msec) 00:54:51.695 slat (nsec): min=6209, max=50016, avg=13393.83, stdev=3618.53 00:54:51.695 clat (usec): min=1120, max=6449, avg=3774.74, stdev=351.03 00:54:51.695 lat (usec): min=1127, max=6461, avg=3788.14, stdev=351.16 00:54:51.695 clat percentiles (usec): 00:54:51.695 | 1.00th=[ 2868], 5.00th=[ 3228], 10.00th=[ 3458], 20.00th=[ 3556], 00:54:51.695 | 30.00th=[ 3654], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3818], 00:54:51.695 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4178], 95.00th=[ 4490], 00:54:51.695 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 5538], 00:54:51.695 | 99.99th=[ 5997] 00:54:51.695 bw ( KiB/s): min=14864, max=18011, per=25.02%, avg=16707.00, stdev=986.09, samples=9 00:54:51.695 iops : min= 1858, max= 2251, avg=2088.33, stdev=123.20, samples=9 00:54:51.695 lat (msec) : 2=0.05%, 4=85.45%, 10=14.50% 00:54:51.695 cpu : usr=96.62%, sys=2.34%, ctx=3, majf=0, minf=0 00:54:51.695 IO depths : 1=8.1%, 2=24.9%, 4=50.1%, 8=16.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:51.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 issued rwts: total=10434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:51.695 latency : target=0, window=0, percentile=100.00%, depth=8 00:54:51.695 filename1: (groupid=0, jobs=1): err= 0: pid=116291: Mon Jul 22 15:08:11 2024 00:54:51.695 read: IOPS=2090, BW=16.3MiB/s (17.1MB/s)(81.7MiB/5002msec) 00:54:51.695 slat (nsec): min=6120, max=49281, avg=9426.42, stdev=3249.25 00:54:51.695 clat (usec): min=1034, max=7195, avg=3788.56, stdev=487.37 00:54:51.695 lat (usec): min=1050, max=7203, avg=3797.99, stdev=487.07 00:54:51.695 clat percentiles (usec): 00:54:51.695 | 1.00th=[ 2073], 5.00th=[ 3130], 10.00th=[ 3326], 20.00th=[ 3556], 00:54:51.695 | 30.00th=[ 3687], 
40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3818], 00:54:51.695 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4490], 00:54:51.695 | 99.00th=[ 5538], 99.50th=[ 5997], 99.90th=[ 6521], 99.95th=[ 6915], 00:54:51.695 | 99.99th=[ 7177] 00:54:51.695 bw ( KiB/s): min=14864, max=18560, per=25.09%, avg=16758.33, stdev=1082.67, samples=9 00:54:51.695 iops : min= 1858, max= 2320, avg=2094.67, stdev=135.22, samples=9 00:54:51.695 lat (msec) : 2=0.55%, 4=81.72%, 10=17.73% 00:54:51.695 cpu : usr=96.34%, sys=2.68%, ctx=7, majf=0, minf=0 00:54:51.695 IO depths : 1=3.7%, 2=14.1%, 4=60.9%, 8=21.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:51.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 complete : 0=0.0%, 4=89.7%, 8=10.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 issued rwts: total=10456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:51.695 latency : target=0, window=0, percentile=100.00%, depth=8 00:54:51.695 filename1: (groupid=0, jobs=1): err= 0: pid=116292: Mon Jul 22 15:08:11 2024 00:54:51.695 read: IOPS=2085, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5001msec) 00:54:51.695 slat (nsec): min=6147, max=63346, avg=13887.17, stdev=4517.43 00:54:51.695 clat (usec): min=1055, max=7674, avg=3770.09, stdev=448.71 00:54:51.695 lat (usec): min=1062, max=7684, avg=3783.98, stdev=448.73 00:54:51.695 clat percentiles (usec): 00:54:51.695 | 1.00th=[ 2606], 5.00th=[ 3130], 10.00th=[ 3425], 20.00th=[ 3556], 00:54:51.695 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:54:51.695 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4178], 95.00th=[ 4555], 00:54:51.695 | 99.00th=[ 5538], 99.50th=[ 5932], 99.90th=[ 6587], 99.95th=[ 6849], 00:54:51.695 | 99.99th=[ 7242] 00:54:51.695 bw ( KiB/s): min=14864, max=17955, per=25.01%, avg=16700.78, stdev=976.97, samples=9 00:54:51.695 iops : min= 1858, max= 2244, avg=2087.56, stdev=122.06, samples=9 00:54:51.695 lat (msec) : 2=0.14%, 4=85.61%, 10=14.24% 00:54:51.695 cpu : usr=95.66%, sys=3.24%, ctx=26, majf=0, minf=0 00:54:51.695 IO depths : 1=6.8%, 2=25.0%, 4=50.0%, 8=18.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:54:51.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:54:51.695 issued rwts: total=10432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:54:51.695 latency : target=0, window=0, percentile=100.00%, depth=8 00:54:51.695 00:54:51.695 Run status group 0 (all jobs): 00:54:51.695 READ: bw=65.2MiB/s (68.4MB/s), 16.3MiB/s-16.3MiB/s (17.1MB/s-17.1MB/s), io=326MiB (342MB), run=5001-5002msec 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 15:08:11 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 00:54:51.955 real 0m23.636s 00:54:51.955 user 2m8.359s 00:54:51.955 sys 0m3.367s 00:54:51.955 ************************************ 00:54:51.955 END TEST fio_dif_rand_params 00:54:51.955 ************************************ 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 15:08:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:54:51.955 15:08:11 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:54:51.955 15:08:11 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 ************************************ 00:54:51.955 START TEST fio_dif_digest 00:54:51.955 ************************************ 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 bdev_null0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:54:51.955 [2024-07-22 15:08:11.548047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:54:51.955 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:54:51.956 { 00:54:51.956 "params": { 00:54:51.956 "name": "Nvme$subsystem", 00:54:51.956 "trtype": "$TEST_TRANSPORT", 00:54:51.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:54:51.956 "adrfam": "ipv4", 00:54:51.956 "trsvcid": "$NVMF_PORT", 00:54:51.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:54:51.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:54:51.956 "hdgst": ${hdgst:-false}, 00:54:51.956 "ddgst": ${ddgst:-false} 00:54:51.956 }, 00:54:51.956 "method": "bdev_nvme_attach_controller" 00:54:51.956 } 00:54:51.956 EOF 00:54:51.956 )") 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:54:51.956 15:08:11 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:54:51.956 "params": { 00:54:51.956 "name": "Nvme0", 00:54:51.956 "trtype": "tcp", 00:54:51.956 "traddr": "10.0.0.2", 00:54:51.956 "adrfam": "ipv4", 00:54:51.956 "trsvcid": "4420", 00:54:51.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:54:51.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:54:51.956 "hdgst": true, 00:54:51.956 "ddgst": true 00:54:51.956 }, 00:54:51.956 "method": "bdev_nvme_attach_controller" 00:54:51.956 }' 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:54:52.215 15:08:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:54:52.215 15:08:11 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:54:52.215 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:54:52.215 ... 00:54:52.215 fio-3.35 00:54:52.215 Starting 3 threads 00:55:04.424 00:55:04.424 filename0: (groupid=0, jobs=1): err= 0: pid=116393: Mon Jul 22 15:08:22 2024 00:55:04.424 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(223MiB/10004msec) 00:55:04.424 slat (nsec): min=6455, max=86853, avg=17816.42, stdev=7332.67 00:55:04.424 clat (usec): min=4304, max=33439, avg=16808.96, stdev=1975.41 00:55:04.424 lat (usec): min=4318, max=33455, avg=16826.78, stdev=1978.08 00:55:04.424 clat percentiles (usec): 00:55:04.424 | 1.00th=[ 9765], 5.00th=[14746], 10.00th=[15270], 20.00th=[15795], 00:55:04.424 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16712], 60.00th=[17171], 00:55:04.424 | 70.00th=[17433], 80.00th=[17695], 90.00th=[18220], 95.00th=[19530], 00:55:04.424 | 99.00th=[22938], 99.50th=[23987], 99.90th=[32900], 99.95th=[33424], 00:55:04.424 | 99.99th=[33424] 00:55:04.424 bw ( KiB/s): min=18688, max=26112, per=27.66%, avg=22770.53, stdev=1790.93, samples=19 00:55:04.424 iops : min= 146, max= 204, avg=177.89, stdev=13.99, samples=19 00:55:04.424 lat (msec) : 10=1.29%, 20=94.28%, 50=4.43% 00:55:04.424 cpu : usr=95.32%, sys=3.49%, ctx=236, majf=0, minf=0 00:55:04.424 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:04.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:04.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:04.424 issued rwts: total=1783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:04.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:55:04.424 filename0: (groupid=0, jobs=1): err= 0: pid=116394: Mon Jul 22 15:08:22 2024 00:55:04.424 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(273MiB/10046msec) 00:55:04.424 slat (nsec): min=6421, max=58767, avg=16369.54, stdev=6246.34 00:55:04.424 clat (usec): min=5602, max=55925, avg=13744.01, stdev=2573.79 00:55:04.424 lat (usec): min=5613, max=55940, avg=13760.38, stdev=2574.75 00:55:04.424 clat percentiles (usec): 00:55:04.424 | 1.00th=[ 7898], 5.00th=[11207], 10.00th=[11863], 20.00th=[12518], 00:55:04.424 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:55:04.424 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15401], 95.00th=[16057], 00:55:04.424 | 99.00th=[17957], 99.50th=[27919], 99.90th=[54789], 99.95th=[55837], 00:55:04.424 | 99.99th=[55837] 00:55:04.424 bw ( KiB/s): min=23808, max=33024, per=33.86%, avg=27877.05, stdev=2435.96, samples=19 00:55:04.424 iops : min= 186, max= 258, avg=217.79, stdev=19.03, samples=19 00:55:04.424 lat (msec) : 10=1.65%, 20=97.85%, 50=0.37%, 100=0.14% 00:55:04.424 cpu : usr=95.27%, sys=3.51%, ctx=64, majf=0, minf=0 00:55:04.424 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:04.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:04.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:04.424 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:04.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:55:04.424 filename0: (groupid=0, jobs=1): err= 0: pid=116395: Mon Jul 22 15:08:22 2024 00:55:04.424 read: IOPS=249, BW=31.1MiB/s (32.6MB/s)(312MiB/10008msec) 00:55:04.424 slat (nsec): min=6541, max=52338, 
avg=16320.59, stdev=5030.76 00:55:04.424 clat (usec): min=7502, max=53120, avg=12025.86, stdev=2404.12 00:55:04.424 lat (usec): min=7518, max=53135, avg=12042.18, stdev=2404.47 00:55:04.424 clat percentiles (usec): 00:55:04.424 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:55:04.424 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:55:04.424 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[14222], 00:55:04.424 | 99.00th=[16188], 99.50th=[20055], 99.90th=[51643], 99.95th=[52691], 00:55:04.424 | 99.99th=[53216] 00:55:04.424 bw ( KiB/s): min=26624, max=35072, per=38.64%, avg=31811.37, stdev=1841.78, samples=19 00:55:04.424 iops : min= 208, max= 274, avg=248.53, stdev=14.39, samples=19 00:55:04.424 lat (msec) : 10=3.89%, 20=95.59%, 50=0.28%, 100=0.24% 00:55:04.424 cpu : usr=94.76%, sys=4.00%, ctx=9, majf=0, minf=0 00:55:04.424 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:55:04.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:04.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:55:04.424 issued rwts: total=2492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:55:04.424 latency : target=0, window=0, percentile=100.00%, depth=3 00:55:04.424 00:55:04.424 Run status group 0 (all jobs): 00:55:04.424 READ: bw=80.4MiB/s (84.3MB/s), 22.3MiB/s-31.1MiB/s (23.4MB/s-32.6MB/s), io=808MiB (847MB), run=10004-10046msec 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:04.424 ************************************ 00:55:04.424 END TEST fio_dif_digest 00:55:04.424 ************************************ 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:04.424 00:55:04.424 real 0m10.952s 00:55:04.424 user 0m29.203s 00:55:04.424 sys 0m1.378s 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:55:04.424 15:08:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:55:04.424 15:08:22 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:55:04.424 15:08:22 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@120 -- # 
set +e 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:55:04.424 rmmod nvme_tcp 00:55:04.424 rmmod nvme_fabrics 00:55:04.424 rmmod nvme_keyring 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 115641 ']' 00:55:04.424 15:08:22 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 115641 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 115641 ']' 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 115641 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115641 00:55:04.424 killing process with pid 115641 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115641' 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@965 -- # kill 115641 00:55:04.424 15:08:22 nvmf_dif -- common/autotest_common.sh@970 -- # wait 115641 00:55:04.425 15:08:22 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:55:04.425 15:08:22 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:04.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:04.425 Waiting for block devices as requested 00:55:04.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:04.425 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:04.425 15:08:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:55:04.425 15:08:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:55:04.425 15:08:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:55:04.425 15:08:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:55:04.425 15:08:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:04.425 15:08:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:55:04.425 15:08:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:04.425 15:08:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:55:04.425 00:55:04.425 real 1m0.023s 00:55:04.425 user 3m55.878s 00:55:04.425 sys 0m11.297s 00:55:04.425 15:08:23 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:55:04.425 15:08:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:55:04.425 ************************************ 00:55:04.425 END TEST nvmf_dif 00:55:04.425 ************************************ 00:55:04.425 15:08:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:55:04.425 15:08:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:55:04.425 15:08:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:55:04.425 15:08:23 -- common/autotest_common.sh@10 -- # set +x 00:55:04.425 
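Before the next suite's output begins, nvmftestfini above unloads the kernel NVMe/TCP modules and kills the nvmf_tgt started for nvmf_dif. The killprocess helper traced there reduces to a check-then-kill-then-reap pattern; a minimal sketch of the same steps (115641 is this run's target pid, and wait only works here because the target was launched by the same shell):

    pid=115641
    kill -0 "$pid"                     # still alive?
    ps --no-headers -o comm= "$pid"    # and not something we must not kill (e.g. sudo)
    kill "$pid"
    wait "$pid"                        # reap it so the next test starts from a clean slate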
************************************ 00:55:04.425 START TEST nvmf_abort_qd_sizes 00:55:04.425 ************************************ 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:55:04.425 * Looking for test storage... 00:55:04.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:55:04.425 15:08:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:55:04.425 Cannot find device "nvmf_tgt_br" 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:55:04.425 Cannot find device "nvmf_tgt_br2" 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:55:04.425 Cannot find device "nvmf_tgt_br" 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:55:04.425 Cannot find device "nvmf_tgt_br2" 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:55:04.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:55:04.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:55:04.425 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:55:04.425 15:08:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:55:04.426 15:08:23 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:55:04.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:55:04.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:55:04.426 00:55:04.426 --- 10.0.0.2 ping statistics --- 00:55:04.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:04.426 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:55:04.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:55:04.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:55:04.426 00:55:04.426 --- 10.0.0.3 ping statistics --- 00:55:04.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:04.426 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:55:04.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:55:04.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:55:04.426 00:55:04.426 --- 10.0.0.1 ping statistics --- 00:55:04.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:55:04.426 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:55:04.426 15:08:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:55:05.363 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:05.363 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:55:05.363 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=116980 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 116980 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 116980 ']' 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:55:05.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:55:05.621 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:05.621 [2024-07-22 15:08:25.106459] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
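nvmf_veth_init, traced above, builds the whole test network from scratch: a namespace for the target, three veth pairs, a bridge that stitches the host-side ends together, iptables rules for port 4420, and ping checks in both directions before nvme-tcp is loaded and nvmf_tgt is started inside the namespace. A condensed sketch using exactly the names and 10.0.0.0/24 addresses from this run (the deletion of stale links from earlier runs is omitted):

    #!/usr/bin/env bash
    set -ex
    netns=nvmf_tgt_ns_spdk

    ip netns add "$netns"

    # One veth pair for the initiator side, two for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-facing ends live inside the namespace.
    ip link set nvmf_tgt_if  netns "$netns"
    ip link set nvmf_tgt_if2 netns "$netns"

    # Initiator is 10.0.0.1; the target listens on 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$netns" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$netns" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peer interfaces.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec "$netns" ip link set nvmf_tgt_if up
    ip netns exec "$netns" ip link set nvmf_tgt_if2 up
    ip netns exec "$netns" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP (port 4420) in and allow bridged traffic to be forwarded.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Same connectivity checks as in the log.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec "$netns" ping -c 1 10.0.0.1

The bridge keeps all three host-side peers in one L2 segment, so the initiator at 10.0.0.1 reaches both target addresses without any extra routes.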
00:55:05.622 [2024-07-22 15:08:25.106568] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:55:05.880 [2024-07-22 15:08:25.255350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:55:05.880 [2024-07-22 15:08:25.316091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:55:05.880 [2024-07-22 15:08:25.316147] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:55:05.880 [2024-07-22 15:08:25.316155] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:55:05.880 [2024-07-22 15:08:25.316160] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:55:05.880 [2024-07-22 15:08:25.316164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:55:05.880 [2024-07-22 15:08:25.316304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:55:05.880 [2024-07-22 15:08:25.316732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:55:05.880 [2024-07-22 15:08:25.316769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:55:05.880 [2024-07-22 15:08:25.316767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:55:06.447 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:55:06.447 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:55:06.447 15:08:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:55:06.447 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:06.447 15:08:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:55:06.447 15:08:26 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:55:06.447 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:55:06.706 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:55:06.706 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:55:06.706 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:55:06.706 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:55:06.707 15:08:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:06.707 ************************************ 00:55:06.707 START TEST spdk_target_abort 00:55:06.707 ************************************ 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:06.707 spdk_targetn1 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:06.707 [2024-07-22 15:08:26.180700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:06.707 [2024-07-22 15:08:26.220828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:06.707 15:08:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:09.999 Initializing NVMe Controllers 00:55:10.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:55:10.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:10.000 Initialization complete. Launching workers. 
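Everything in the spdk_target_abort setup above goes through SPDK's JSON-RPC: the local PCIe NVMe drive is attached as controller spdk_target (namespace bdev spdk_targetn1), a TCP transport, subsystem, namespace and listener are created, and then the abort example is run against that listener at queue depths 4, 24 and 64. A condensed sketch, assuming rpc_cmd in the harness resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    set -ex
    SPDK=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }

    # Expose the local PCIe NVMe namespace over NVMe/TCP at 10.0.0.2:4420.
    rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

    # Hammer the subsystem with mixed I/O plus abort commands at growing queue depths.
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done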
00:55:10.000 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12584, failed: 0 00:55:10.000 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1150, failed to submit 11434 00:55:10.000 success 738, unsuccess 412, failed 0 00:55:10.000 15:08:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:10.000 15:08:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:13.286 Initializing NVMe Controllers 00:55:13.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:55:13.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:13.286 Initialization complete. Launching workers. 00:55:13.286 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5974, failed: 0 00:55:13.286 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 4721 00:55:13.286 success 251, unsuccess 1002, failed 0 00:55:13.286 15:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:13.286 15:08:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:16.577 Initializing NVMe Controllers 00:55:16.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:55:16.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:16.577 Initialization complete. Launching workers. 
00:55:16.577 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30129, failed: 0 00:55:16.577 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2735, failed to submit 27394 00:55:16.577 success 457, unsuccess 2278, failed 0 00:55:16.577 15:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:55:16.577 15:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:16.577 15:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:16.577 15:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:16.577 15:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:55:16.577 15:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:16.577 15:08:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 116980 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 116980 ']' 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 116980 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 116980 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 116980' 00:55:17.950 killing process with pid 116980 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 116980 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 116980 00:55:17.950 00:55:17.950 real 0m11.418s 00:55:17.950 user 0m45.605s 00:55:17.950 sys 0m1.450s 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:17.950 ************************************ 00:55:17.950 END TEST spdk_target_abort 00:55:17.950 ************************************ 00:55:17.950 15:08:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:55:17.950 15:08:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:55:17.950 15:08:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:55:17.950 15:08:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:17.950 ************************************ 00:55:17.950 START TEST kernel_target_abort 00:55:17.950 
************************************ 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:55:17.950 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:55:18.209 15:08:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:18.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:18.727 Waiting for block devices as requested 00:55:18.727 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:18.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:55:18.727 No valid GPT data, bailing 00:55:18.727 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:55:18.986 No valid GPT data, bailing 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
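The loop above walks /sys/block/nvme*, rejecting zoned namespaces and anything that is already in use or carries a partition table (block_in_use via scripts/spdk-gpt.py and blkid), and keeps the last namespace that passes as the kernel target's backing device. A rough standalone sketch that approximates the in-use check with plain blkid instead of the harness helper:

    #!/usr/bin/env bash
    nvme=
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=/dev/${block##*/}

        # Skip zoned (ZNS) namespaces, as the harness does.
        if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
            continue
        fi

        # Approximation of block_in_use: skip anything with a partition table.
        if blkid -s PTTYPE -o value "$dev" | grep -q .; then
            continue
        fi

        nvme=$dev          # last usable namespace wins, matching the trace
    done
    echo "backing device: ${nvme:-none found}"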
00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:55:18.986 No valid GPT data, bailing 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:55:18.986 No valid GPT data, bailing 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:55:18.986 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 --hostid=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 -a 10.0.0.1 -t tcp -s 4420 00:55:19.246 00:55:19.246 Discovery Log Number of Records 2, Generation counter 2 00:55:19.246 =====Discovery Log Entry 0====== 00:55:19.246 trtype: tcp 00:55:19.246 adrfam: ipv4 00:55:19.246 subtype: current discovery subsystem 00:55:19.246 treq: not specified, sq flow control disable supported 00:55:19.246 portid: 1 00:55:19.246 trsvcid: 4420 00:55:19.246 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:55:19.246 traddr: 10.0.0.1 00:55:19.246 eflags: none 00:55:19.246 sectype: none 00:55:19.246 =====Discovery Log Entry 1====== 00:55:19.246 trtype: tcp 00:55:19.246 adrfam: ipv4 00:55:19.246 subtype: nvme subsystem 00:55:19.246 treq: not specified, sq flow control disable supported 00:55:19.246 portid: 1 00:55:19.246 trsvcid: 4420 00:55:19.246 subnqn: nqn.2016-06.io.spdk:testnqn 00:55:19.246 traddr: 10.0.0.1 00:55:19.246 eflags: none 00:55:19.246 sectype: none 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:55:19.246 15:08:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:19.246 15:08:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:22.533 Initializing NVMe Controllers 00:55:22.533 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:55:22.533 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:22.533 Initialization complete. Launching workers. 00:55:22.533 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39459, failed: 0 00:55:22.533 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39459, failed to submit 0 00:55:22.533 success 0, unsuccess 39459, failed 0 00:55:22.533 15:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:22.533 15:08:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:25.824 Initializing NVMe Controllers 00:55:25.824 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:55:25.824 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:25.824 Initialization complete. Launching workers. 
00:55:25.824 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77556, failed: 0 00:55:25.824 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35761, failed to submit 41795 00:55:25.824 success 0, unsuccess 35761, failed 0 00:55:25.824 15:08:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:55:25.824 15:08:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:55:29.139 Initializing NVMe Controllers 00:55:29.139 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:55:29.139 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:55:29.139 Initialization complete. Launching workers. 00:55:29.139 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98882, failed: 0 00:55:29.139 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24761, failed to submit 74121 00:55:29.139 success 0, unsuccess 24761, failed 0 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:55:29.139 15:08:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:55:29.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:33.907 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:55:33.907 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:55:33.907 00:55:33.907 real 0m15.217s 00:55:33.907 user 0m6.945s 00:55:33.907 sys 0m5.973s 00:55:33.907 15:08:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:55:33.907 15:08:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:55:33.907 ************************************ 00:55:33.907 END TEST kernel_target_abort 00:55:33.907 ************************************ 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:55:33.907 
15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:55:33.907 rmmod nvme_tcp 00:55:33.907 rmmod nvme_fabrics 00:55:33.907 rmmod nvme_keyring 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 116980 ']' 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 116980 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 116980 ']' 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 116980 00:55:33.907 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (116980) - No such process 00:55:33.907 Process with pid 116980 is not found 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 116980 is not found' 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:55:33.907 15:08:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:55:33.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:33.907 Waiting for block devices as requested 00:55:33.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:55:34.168 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:55:34.168 00:55:34.168 real 0m30.221s 00:55:34.168 user 0m53.752s 00:55:34.168 sys 0m9.112s 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:55:34.168 15:08:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:55:34.168 ************************************ 00:55:34.168 END TEST nvmf_abort_qd_sizes 00:55:34.168 ************************************ 00:55:34.428 15:08:53 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:55:34.428 15:08:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:55:34.428 15:08:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:55:34.428 15:08:53 -- common/autotest_common.sh@10 -- # set +x 
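The kernel_target_abort phase that just finished never touches SPDK's target at all: the NVMe-oF target is the in-kernel nvmet driver, configured purely through configfs and torn down the same way. A condensed sketch of that setup and cleanup; xtrace shows the echo/mkdir/ln/rmdir commands but not their redirection targets, so the attribute file names below are the standard nvmet ones and should be read as an informed assumption:

    #!/usr/bin/env bash
    set -ex
    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo 1            > "$subsys/attr_allow_any_host"       # first `echo 1` in the trace
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # backing device chosen above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    # The trace also writes "SPDK-$nqn" into one of the subsystem's attr_* files
    # (model/serial); the exact file is not visible, so it is left out here.
    ln -s "$subsys" "$port/subsystems/"

    # The abort workloads then run against 10.0.0.1:4420; clean_kernel_target undoes it:
    rm -f "$port/subsystems/$nqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet

The cleanup mirrors the setup in reverse order, which is the order configfs expects before the directories can be removed and the modules unloaded.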
00:55:34.428 ************************************ 00:55:34.428 START TEST keyring_file 00:55:34.428 ************************************ 00:55:34.428 15:08:53 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:55:34.428 * Looking for test storage... 00:55:34.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:55:34.428 15:08:53 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:55:34.428 15:08:53 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:34.428 15:08:53 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:34.428 15:08:53 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:34.428 15:08:53 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:34.428 15:08:53 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:34.428 15:08:53 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:34.428 15:08:53 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:34.428 15:08:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:55:34.428 15:08:53 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@47 -- # : 0 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:34.428 15:08:53 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:55:34.428 15:08:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:55:34.428 15:08:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:55:34.428 15:08:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:55:34.428 15:08:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:55:34.428 15:08:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:55:34.428 15:08:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SDYWKSR7L3 00:55:34.428 15:08:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:55:34.428 15:08:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:55:34.428 15:08:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:55:34.428 15:08:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:55:34.428 15:08:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:55:34.428 15:08:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:55:34.428 15:08:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:55:34.688 15:08:54 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SDYWKSR7L3 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SDYWKSR7L3 00:55:34.688 15:08:54 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SDYWKSR7L3 00:55:34.688 15:08:54 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@17 -- # name=key1 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kbKvwqwKqa 00:55:34.688 15:08:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:55:34.688 15:08:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:55:34.688 15:08:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:55:34.688 15:08:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:55:34.688 15:08:54 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:55:34.688 15:08:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:55:34.688 15:08:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:55:34.689 15:08:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kbKvwqwKqa 00:55:34.689 15:08:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kbKvwqwKqa 00:55:34.689 15:08:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.kbKvwqwKqa 00:55:34.689 15:08:54 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:34.689 15:08:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=117882 00:55:34.689 15:08:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 117882 00:55:34.689 15:08:54 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 117882 ']' 00:55:34.689 15:08:54 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:34.689 15:08:54 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:55:34.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:34.689 15:08:54 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:55:34.689 15:08:54 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:55:34.689 15:08:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:34.689 [2024-07-22 15:08:54.165658] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
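The prep_key calls above reduce to four steps: make a temp file, write the hex PSK in NVMeTLSkey-1 interchange form, restrict it to mode 0600, and later register the path over JSON-RPC. A minimal sketch of that flow, assuming the script paths from this run; the formatted key string is only a placeholder, since the real encoding is produced by the small python helper in nvmf/common.sh:

    key=00112233445566778899aabbccddeeff             # hex PSK used for key0 in this test
    path=$(mktemp)                                   # e.g. /tmp/tmp.XXXXXXXXXX
    # format_interchange_psk in nvmf/common.sh emits the "NVMeTLSkey-1:..." form via python;
    # a placeholder stands in for that output here
    echo 'NVMeTLSkey-1:<formatted-key>:' > "$path"
    chmod 0600 "$path"                               # anything looser (e.g. 0660) is rejected later in this test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"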
00:55:34.689 [2024-07-22 15:08:54.165747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117882 ] 00:55:34.689 [2024-07-22 15:08:54.304462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:34.950 [2024-07-22 15:08:54.359249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:55:35.520 15:08:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:35.520 [2024-07-22 15:08:55.055854] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:35.520 null0 00:55:35.520 [2024-07-22 15:08:55.087763] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:55:35.520 [2024-07-22 15:08:55.087964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:55:35.520 [2024-07-22 15:08:55.095752] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:35.520 15:08:55 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:35.520 15:08:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:35.520 [2024-07-22 15:08:55.111744] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:55:35.520 2024/07/22 15:08:55 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:55:35.520 request: 00:55:35.520 { 00:55:35.520 "method": "nvmf_subsystem_add_listener", 00:55:35.520 "params": { 00:55:35.520 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:55:35.520 "secure_channel": false, 00:55:35.520 "listen_address": { 00:55:35.520 "trtype": "tcp", 00:55:35.521 "traddr": "127.0.0.1", 00:55:35.521 "trsvcid": "4420" 00:55:35.521 } 00:55:35.521 } 00:55:35.521 } 00:55:35.521 Got JSON-RPC error response 00:55:35.521 
GoRPCClient: error on JSON-RPC call 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:55:35.521 15:08:55 keyring_file -- keyring/file.sh@46 -- # bperfpid=117913 00:55:35.521 15:08:55 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:55:35.521 15:08:55 keyring_file -- keyring/file.sh@48 -- # waitforlisten 117913 /var/tmp/bperf.sock 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 117913 ']' 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:55:35.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:55:35.521 15:08:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:35.780 [2024-07-22 15:08:55.174429] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:55:35.780 [2024-07-22 15:08:55.174506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117913 ] 00:55:35.780 [2024-07-22 15:08:55.312871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:35.780 [2024-07-22 15:08:55.368182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:55:36.718 15:08:56 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:55:36.718 15:08:56 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:55:36.718 15:08:56 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:36.718 15:08:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:36.718 15:08:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kbKvwqwKqa 00:55:36.718 15:08:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kbKvwqwKqa 00:55:36.978 15:08:56 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:55:36.978 15:08:56 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:55:36.978 15:08:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:36.978 15:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:36.978 15:08:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:37.238 15:08:56 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.SDYWKSR7L3 == \/\t\m\p\/\t\m\p\.\S\D\Y\W\K\S\R\7\L\3 ]] 00:55:37.238 
15:08:56 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:55:37.238 15:08:56 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:55:37.238 15:08:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:37.238 15:08:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:37.238 15:08:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:37.499 15:08:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kbKvwqwKqa == \/\t\m\p\/\t\m\p\.\k\b\K\v\w\q\w\K\q\a ]] 00:55:37.499 15:08:57 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:55:37.499 15:08:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:37.499 15:08:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:37.499 15:08:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:37.499 15:08:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:37.499 15:08:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:37.759 15:08:57 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:55:37.759 15:08:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:55:37.759 15:08:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:37.759 15:08:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:37.759 15:08:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:37.759 15:08:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:37.759 15:08:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:38.030 15:08:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:55:38.030 15:08:57 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:38.030 15:08:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:38.299 [2024-07-22 15:08:57.728951] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:55:38.299 nvme0n1 00:55:38.299 15:08:57 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:55:38.299 15:08:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:38.299 15:08:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:38.299 15:08:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:38.299 15:08:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:38.299 15:08:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:38.559 15:08:58 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:55:38.559 15:08:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:55:38.559 15:08:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:38.559 15:08:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:38.559 15:08:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:38.559 15:08:58 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:38.559 15:08:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:38.818 15:08:58 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:55:38.818 15:08:58 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:55:38.818 Running I/O for 1 seconds... 00:55:40.200 00:55:40.200 Latency(us) 00:55:40.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:40.200 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:55:40.200 nvme0n1 : 1.00 15179.69 59.30 0.00 0.00 8408.76 4521.70 20490.73 00:55:40.200 =================================================================================================================== 00:55:40.200 Total : 15179.69 59.30 0.00 0.00 8408.76 4521.70 20490.73 00:55:40.200 0 00:55:40.200 15:08:59 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:55:40.200 15:08:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:55:40.200 15:08:59 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:55:40.200 15:08:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:40.200 15:08:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:40.200 15:08:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:40.200 15:08:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:40.200 15:08:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:40.460 15:08:59 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:55:40.460 15:08:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:55:40.460 15:08:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:40.460 15:08:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:40.460 15:08:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:40.460 15:08:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:40.460 15:08:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:40.726 15:09:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:55:40.726 15:09:00 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:40.726 15:09:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:55:40.726 15:09:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:40.726 15:09:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:55:40.726 15:09:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:40.726 15:09:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:55:40.727 15:09:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:40.727 15:09:00 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:40.727 15:09:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:55:40.727 [2024-07-22 15:09:00.290979] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:55:40.727 [2024-07-22 15:09:00.291555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4f800 (107): Transport endpoint is not connected 00:55:40.727 [2024-07-22 15:09:00.292542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4f800 (9): Bad file descriptor 00:55:40.727 [2024-07-22 15:09:00.293539] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:40.727 [2024-07-22 15:09:00.293556] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:55:40.727 [2024-07-22 15:09:00.293565] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:40.727 2024/07/22 15:09:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:55:40.727 request: 00:55:40.727 { 00:55:40.727 "method": "bdev_nvme_attach_controller", 00:55:40.727 "params": { 00:55:40.727 "name": "nvme0", 00:55:40.727 "trtype": "tcp", 00:55:40.727 "traddr": "127.0.0.1", 00:55:40.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:40.727 "adrfam": "ipv4", 00:55:40.727 "trsvcid": "4420", 00:55:40.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:40.727 "psk": "key1" 00:55:40.727 } 00:55:40.727 } 00:55:40.727 Got JSON-RPC error response 00:55:40.727 GoRPCClient: error on JSON-RPC call 00:55:40.727 15:09:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:55:40.727 15:09:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:55:40.727 15:09:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:55:40.727 15:09:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:55:40.727 15:09:00 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:55:40.727 15:09:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:40.727 15:09:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:40.727 15:09:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:40.727 15:09:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:40.727 15:09:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:41.028 15:09:00 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:55:41.028 15:09:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:55:41.028 15:09:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:41.028 15:09:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:41.028 15:09:00 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:41.028 15:09:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:41.028 15:09:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:41.289 15:09:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:55:41.289 15:09:00 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:55:41.289 15:09:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:55:41.550 15:09:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:55:41.550 15:09:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:55:41.811 15:09:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:55:41.811 15:09:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:41.811 15:09:01 keyring_file -- keyring/file.sh@77 -- # jq length 00:55:42.072 15:09:01 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:55:42.072 15:09:01 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.SDYWKSR7L3 00:55:42.072 15:09:01 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:42.072 15:09:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:42.072 [2024-07-22 15:09:01.643458] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SDYWKSR7L3': 0100660 00:55:42.072 [2024-07-22 15:09:01.643499] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:55:42.072 2024/07/22 15:09:01 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.SDYWKSR7L3], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:55:42.072 request: 00:55:42.072 { 00:55:42.072 "method": "keyring_file_add_key", 00:55:42.072 "params": { 00:55:42.072 "name": "key0", 00:55:42.072 "path": "/tmp/tmp.SDYWKSR7L3" 00:55:42.072 } 00:55:42.072 } 00:55:42.072 Got JSON-RPC error response 00:55:42.072 GoRPCClient: error on JSON-RPC call 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:55:42.072 15:09:01 keyring_file -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:55:42.072 15:09:01 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.SDYWKSR7L3 00:55:42.072 15:09:01 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:42.072 15:09:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SDYWKSR7L3 00:55:42.332 15:09:01 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.SDYWKSR7L3 00:55:42.332 15:09:01 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:55:42.332 15:09:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:42.332 15:09:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:42.332 15:09:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:42.332 15:09:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:42.332 15:09:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:42.592 15:09:02 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:55:42.592 15:09:02 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:42.592 15:09:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:55:42.592 15:09:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:42.592 15:09:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:55:42.592 15:09:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:42.592 15:09:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:55:42.592 15:09:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:42.592 15:09:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:42.592 15:09:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:42.851 [2024-07-22 15:09:02.350268] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SDYWKSR7L3': No such file or directory 00:55:42.851 [2024-07-22 15:09:02.350316] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:55:42.851 [2024-07-22 15:09:02.350337] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:55:42.851 [2024-07-22 15:09:02.350344] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:55:42.851 [2024-07-22 15:09:02.350351] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:55:42.851 2024/07/22 15:09:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:55:42.851 request: 00:55:42.851 { 00:55:42.851 "method": "bdev_nvme_attach_controller", 00:55:42.851 "params": { 00:55:42.851 "name": "nvme0", 00:55:42.851 "trtype": "tcp", 00:55:42.851 "traddr": "127.0.0.1", 00:55:42.851 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:42.851 "adrfam": "ipv4", 00:55:42.851 "trsvcid": "4420", 00:55:42.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:42.851 "psk": "key0" 00:55:42.851 } 00:55:42.851 } 00:55:42.851 Got JSON-RPC error response 00:55:42.851 GoRPCClient: error on JSON-RPC call 00:55:42.851 15:09:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:55:42.851 15:09:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:55:42.851 15:09:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:55:42.851 15:09:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:55:42.851 15:09:02 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:55:42.851 15:09:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:55:43.111 15:09:02 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IELa8NmhSG 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:55:43.111 15:09:02 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:55:43.111 15:09:02 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:55:43.111 15:09:02 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:55:43.111 15:09:02 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:55:43.111 15:09:02 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:55:43.111 15:09:02 keyring_file -- nvmf/common.sh@705 -- # python - 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IELa8NmhSG 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IELa8NmhSG 00:55:43.111 15:09:02 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.IELa8NmhSG 00:55:43.111 15:09:02 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IELa8NmhSG 00:55:43.111 15:09:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IELa8NmhSG 00:55:43.371 15:09:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:43.371 15:09:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:43.635 nvme0n1 00:55:43.900 
15:09:03 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:55:43.900 15:09:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:43.900 15:09:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:43.900 15:09:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:43.900 15:09:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:43.900 15:09:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:43.900 15:09:03 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:55:43.900 15:09:03 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:55:43.900 15:09:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:55:44.158 15:09:03 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:55:44.158 15:09:03 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:55:44.158 15:09:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:44.158 15:09:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:44.158 15:09:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:44.417 15:09:03 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:55:44.417 15:09:03 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:55:44.417 15:09:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:44.417 15:09:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:44.417 15:09:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:44.417 15:09:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:44.417 15:09:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:44.675 15:09:04 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:55:44.675 15:09:04 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:55:44.675 15:09:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:55:44.933 15:09:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:55:44.933 15:09:04 keyring_file -- keyring/file.sh@104 -- # jq length 00:55:44.933 15:09:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:45.192 15:09:04 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:55:45.192 15:09:04 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IELa8NmhSG 00:55:45.192 15:09:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IELa8NmhSG 00:55:45.452 15:09:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.kbKvwqwKqa 00:55:45.452 15:09:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.kbKvwqwKqa 00:55:45.711 15:09:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:55:45.711 15:09:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:55:45.970 nvme0n1 00:55:45.970 15:09:05 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:55:45.970 15:09:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:55:46.229 15:09:05 keyring_file -- keyring/file.sh@112 -- # config='{ 00:55:46.229 "subsystems": [ 00:55:46.229 { 00:55:46.229 "subsystem": "keyring", 00:55:46.229 "config": [ 00:55:46.229 { 00:55:46.229 "method": "keyring_file_add_key", 00:55:46.229 "params": { 00:55:46.229 "name": "key0", 00:55:46.229 "path": "/tmp/tmp.IELa8NmhSG" 00:55:46.229 } 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "method": "keyring_file_add_key", 00:55:46.229 "params": { 00:55:46.229 "name": "key1", 00:55:46.229 "path": "/tmp/tmp.kbKvwqwKqa" 00:55:46.229 } 00:55:46.229 } 00:55:46.229 ] 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "subsystem": "iobuf", 00:55:46.229 "config": [ 00:55:46.229 { 00:55:46.229 "method": "iobuf_set_options", 00:55:46.229 "params": { 00:55:46.229 "large_bufsize": 135168, 00:55:46.229 "large_pool_count": 1024, 00:55:46.229 "small_bufsize": 8192, 00:55:46.229 "small_pool_count": 8192 00:55:46.229 } 00:55:46.229 } 00:55:46.229 ] 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "subsystem": "sock", 00:55:46.229 "config": [ 00:55:46.229 { 00:55:46.229 "method": "sock_set_default_impl", 00:55:46.229 "params": { 00:55:46.229 "impl_name": "posix" 00:55:46.229 } 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "method": "sock_impl_set_options", 00:55:46.229 "params": { 00:55:46.229 "enable_ktls": false, 00:55:46.229 "enable_placement_id": 0, 00:55:46.229 "enable_quickack": false, 00:55:46.229 "enable_recv_pipe": true, 00:55:46.229 "enable_zerocopy_send_client": false, 00:55:46.229 "enable_zerocopy_send_server": true, 00:55:46.229 "impl_name": "ssl", 00:55:46.229 "recv_buf_size": 4096, 00:55:46.229 "send_buf_size": 4096, 00:55:46.229 "tls_version": 0, 00:55:46.229 "zerocopy_threshold": 0 00:55:46.229 } 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "method": "sock_impl_set_options", 00:55:46.229 "params": { 00:55:46.229 "enable_ktls": false, 00:55:46.229 "enable_placement_id": 0, 00:55:46.229 "enable_quickack": false, 00:55:46.229 "enable_recv_pipe": true, 00:55:46.229 "enable_zerocopy_send_client": false, 00:55:46.229 "enable_zerocopy_send_server": true, 00:55:46.229 "impl_name": "posix", 00:55:46.229 "recv_buf_size": 2097152, 00:55:46.229 "send_buf_size": 2097152, 00:55:46.229 "tls_version": 0, 00:55:46.229 "zerocopy_threshold": 0 00:55:46.229 } 00:55:46.229 } 00:55:46.229 ] 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "subsystem": "vmd", 00:55:46.229 "config": [] 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "subsystem": "accel", 00:55:46.229 "config": [ 00:55:46.229 { 00:55:46.229 "method": "accel_set_options", 00:55:46.229 "params": { 00:55:46.229 "buf_count": 2048, 00:55:46.229 "large_cache_size": 16, 00:55:46.229 "sequence_count": 2048, 00:55:46.229 "small_cache_size": 128, 00:55:46.229 "task_count": 2048 00:55:46.229 } 00:55:46.229 } 00:55:46.229 ] 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "subsystem": "bdev", 00:55:46.229 "config": [ 00:55:46.229 { 00:55:46.229 "method": "bdev_set_options", 00:55:46.229 "params": { 00:55:46.229 "bdev_auto_examine": 
true, 00:55:46.229 "bdev_io_cache_size": 256, 00:55:46.229 "bdev_io_pool_size": 65535, 00:55:46.229 "iobuf_large_cache_size": 16, 00:55:46.229 "iobuf_small_cache_size": 128 00:55:46.229 } 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "method": "bdev_raid_set_options", 00:55:46.229 "params": { 00:55:46.229 "process_window_size_kb": 1024 00:55:46.229 } 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "method": "bdev_iscsi_set_options", 00:55:46.229 "params": { 00:55:46.229 "timeout_sec": 30 00:55:46.229 } 00:55:46.229 }, 00:55:46.229 { 00:55:46.229 "method": "bdev_nvme_set_options", 00:55:46.229 "params": { 00:55:46.229 "action_on_timeout": "none", 00:55:46.229 "allow_accel_sequence": false, 00:55:46.229 "arbitration_burst": 0, 00:55:46.229 "bdev_retry_count": 3, 00:55:46.229 "ctrlr_loss_timeout_sec": 0, 00:55:46.229 "delay_cmd_submit": true, 00:55:46.229 "dhchap_dhgroups": [ 00:55:46.229 "null", 00:55:46.229 "ffdhe2048", 00:55:46.229 "ffdhe3072", 00:55:46.229 "ffdhe4096", 00:55:46.229 "ffdhe6144", 00:55:46.229 "ffdhe8192" 00:55:46.229 ], 00:55:46.229 "dhchap_digests": [ 00:55:46.229 "sha256", 00:55:46.229 "sha384", 00:55:46.229 "sha512" 00:55:46.230 ], 00:55:46.230 "disable_auto_failback": false, 00:55:46.230 "fast_io_fail_timeout_sec": 0, 00:55:46.230 "generate_uuids": false, 00:55:46.230 "high_priority_weight": 0, 00:55:46.230 "io_path_stat": false, 00:55:46.230 "io_queue_requests": 512, 00:55:46.230 "keep_alive_timeout_ms": 10000, 00:55:46.230 "low_priority_weight": 0, 00:55:46.230 "medium_priority_weight": 0, 00:55:46.230 "nvme_adminq_poll_period_us": 10000, 00:55:46.230 "nvme_error_stat": false, 00:55:46.230 "nvme_ioq_poll_period_us": 0, 00:55:46.230 "rdma_cm_event_timeout_ms": 0, 00:55:46.230 "rdma_max_cq_size": 0, 00:55:46.230 "rdma_srq_size": 0, 00:55:46.230 "reconnect_delay_sec": 0, 00:55:46.230 "timeout_admin_us": 0, 00:55:46.230 "timeout_us": 0, 00:55:46.230 "transport_ack_timeout": 0, 00:55:46.230 "transport_retry_count": 4, 00:55:46.230 "transport_tos": 0 00:55:46.230 } 00:55:46.230 }, 00:55:46.230 { 00:55:46.230 "method": "bdev_nvme_attach_controller", 00:55:46.230 "params": { 00:55:46.230 "adrfam": "IPv4", 00:55:46.230 "ctrlr_loss_timeout_sec": 0, 00:55:46.230 "ddgst": false, 00:55:46.230 "fast_io_fail_timeout_sec": 0, 00:55:46.230 "hdgst": false, 00:55:46.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:46.230 "name": "nvme0", 00:55:46.230 "prchk_guard": false, 00:55:46.230 "prchk_reftag": false, 00:55:46.230 "psk": "key0", 00:55:46.230 "reconnect_delay_sec": 0, 00:55:46.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:46.230 "traddr": "127.0.0.1", 00:55:46.230 "trsvcid": "4420", 00:55:46.230 "trtype": "TCP" 00:55:46.230 } 00:55:46.230 }, 00:55:46.230 { 00:55:46.230 "method": "bdev_nvme_set_hotplug", 00:55:46.230 "params": { 00:55:46.230 "enable": false, 00:55:46.230 "period_us": 100000 00:55:46.230 } 00:55:46.230 }, 00:55:46.230 { 00:55:46.230 "method": "bdev_wait_for_examine" 00:55:46.230 } 00:55:46.230 ] 00:55:46.230 }, 00:55:46.230 { 00:55:46.230 "subsystem": "nbd", 00:55:46.230 "config": [] 00:55:46.230 } 00:55:46.230 ] 00:55:46.230 }' 00:55:46.230 15:09:05 keyring_file -- keyring/file.sh@114 -- # killprocess 117913 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 117913 ']' 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@950 -- # kill -0 117913 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@951 -- # uname 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:55:46.230 
15:09:05 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 117913 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:55:46.230 killing process with pid 117913 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 117913' 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@965 -- # kill 117913 00:55:46.230 Received shutdown signal, test time was about 1.000000 seconds 00:55:46.230 00:55:46.230 Latency(us) 00:55:46.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:46.230 =================================================================================================================== 00:55:46.230 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:46.230 15:09:05 keyring_file -- common/autotest_common.sh@970 -- # wait 117913 00:55:46.490 15:09:05 keyring_file -- keyring/file.sh@117 -- # bperfpid=118379 00:55:46.490 15:09:05 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:55:46.490 15:09:05 keyring_file -- keyring/file.sh@119 -- # waitforlisten 118379 /var/tmp/bperf.sock 00:55:46.490 15:09:05 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 118379 ']' 00:55:46.490 15:09:05 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:55:46.490 15:09:05 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:55:46.490 15:09:05 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:55:46.490 "subsystems": [ 00:55:46.490 { 00:55:46.490 "subsystem": "keyring", 00:55:46.490 "config": [ 00:55:46.490 { 00:55:46.490 "method": "keyring_file_add_key", 00:55:46.490 "params": { 00:55:46.490 "name": "key0", 00:55:46.490 "path": "/tmp/tmp.IELa8NmhSG" 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "keyring_file_add_key", 00:55:46.490 "params": { 00:55:46.490 "name": "key1", 00:55:46.490 "path": "/tmp/tmp.kbKvwqwKqa" 00:55:46.490 } 00:55:46.490 } 00:55:46.490 ] 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "subsystem": "iobuf", 00:55:46.490 "config": [ 00:55:46.490 { 00:55:46.490 "method": "iobuf_set_options", 00:55:46.490 "params": { 00:55:46.490 "large_bufsize": 135168, 00:55:46.490 "large_pool_count": 1024, 00:55:46.490 "small_bufsize": 8192, 00:55:46.490 "small_pool_count": 8192 00:55:46.490 } 00:55:46.490 } 00:55:46.490 ] 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "subsystem": "sock", 00:55:46.490 "config": [ 00:55:46.490 { 00:55:46.490 "method": "sock_set_default_impl", 00:55:46.490 "params": { 00:55:46.490 "impl_name": "posix" 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "sock_impl_set_options", 00:55:46.490 "params": { 00:55:46.490 "enable_ktls": false, 00:55:46.490 "enable_placement_id": 0, 00:55:46.490 "enable_quickack": false, 00:55:46.490 "enable_recv_pipe": true, 00:55:46.490 "enable_zerocopy_send_client": false, 00:55:46.490 "enable_zerocopy_send_server": true, 00:55:46.490 "impl_name": "ssl", 00:55:46.490 "recv_buf_size": 4096, 00:55:46.490 "send_buf_size": 4096, 00:55:46.490 "tls_version": 0, 00:55:46.490 "zerocopy_threshold": 0 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "sock_impl_set_options", 00:55:46.490 "params": { 00:55:46.490 "enable_ktls": false, 00:55:46.490 
"enable_placement_id": 0, 00:55:46.490 "enable_quickack": false, 00:55:46.490 "enable_recv_pipe": true, 00:55:46.490 "enable_zerocopy_send_client": false, 00:55:46.490 "enable_zerocopy_send_server": true, 00:55:46.490 "impl_name": "posix", 00:55:46.490 "recv_buf_size": 2097152, 00:55:46.490 "send_buf_size": 2097152, 00:55:46.490 "tls_version": 0, 00:55:46.490 "zerocopy_threshold": 0 00:55:46.490 } 00:55:46.490 } 00:55:46.490 ] 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "subsystem": "vmd", 00:55:46.490 "config": [] 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "subsystem": "accel", 00:55:46.490 "config": [ 00:55:46.490 { 00:55:46.490 "method": "accel_set_options", 00:55:46.490 "params": { 00:55:46.490 "buf_count": 2048, 00:55:46.490 "large_cache_size": 16, 00:55:46.490 "sequence_count": 2048, 00:55:46.490 "small_cache_size": 128, 00:55:46.490 "task_count": 2048 00:55:46.490 } 00:55:46.490 } 00:55:46.490 ] 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "subsystem": "bdev", 00:55:46.490 "config": [ 00:55:46.490 { 00:55:46.490 "method": "bdev_set_options", 00:55:46.490 "params": { 00:55:46.490 "bdev_auto_examine": true, 00:55:46.490 "bdev_io_cache_size": 256, 00:55:46.490 "bdev_io_pool_size": 65535, 00:55:46.490 "iobuf_large_cache_size": 16, 00:55:46.490 "iobuf_small_cache_size": 128 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "bdev_raid_set_options", 00:55:46.490 "params": { 00:55:46.490 "process_window_size_kb": 1024 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "bdev_iscsi_set_options", 00:55:46.490 "params": { 00:55:46.490 "timeout_sec": 30 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "bdev_nvme_set_options", 00:55:46.490 "params": { 00:55:46.490 "action_on_timeout": "none", 00:55:46.490 "allow_accel_sequence": false, 00:55:46.490 "arbitration_burst": 0, 00:55:46.490 "bdev_retry_count": 3, 00:55:46.490 "ctrlr_loss_timeout_sec": 0, 00:55:46.490 "delay_cmd_submit": true, 00:55:46.490 "dhchap_dhgroups": [ 00:55:46.490 "null", 00:55:46.490 "ffdhe2048", 00:55:46.490 "ffdhe3072", 00:55:46.490 "ffdhe4096", 00:55:46.490 "ffdhe6144", 00:55:46.490 "ffdhe8192" 00:55:46.490 ], 00:55:46.490 "dhchap_digests": [ 00:55:46.490 "sha256", 00:55:46.490 "sha384", 00:55:46.490 "sha512" 00:55:46.490 ], 00:55:46.490 "disable_auto_failback": false, 00:55:46.490 "fast_io_fail_timeout_sec": 0, 00:55:46.490 "generate_uuids": false, 00:55:46.490 "high_priority_weight": 0, 00:55:46.490 "io_path_stat": false, 00:55:46.490 "io_queue_requests": 512, 00:55:46.490 "keep_alive_timeout_ms": 10000, 00:55:46.490 "low_priority_weight": 0, 00:55:46.490 "medium_priority_weight": 0, 00:55:46.490 "nvme_adminq_poll_period_us": 10000, 00:55:46.490 "nvme_error_stat": false, 00:55:46.490 "nvme_ioq_poll_period_us": 0, 00:55:46.490 "rdma_cm_event_timeout_ms": 0, 00:55:46.490 "rdma_max_cq_size": 0, 00:55:46.490 "rdma_srq_size": 0, 00:55:46.490 "reconnect_delay_sec": 0, 00:55:46.490 "timeout_admin_us": 0, 00:55:46.490 "timeout_us": 0, 00:55:46.490 "transport_ack_timeout": 0, 00:55:46.490 "transport_retry_count": 4, 00:55:46.490 "transport_tos": 0 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "bdev_nvme_attach_controller", 00:55:46.490 "params": { 00:55:46.490 "adrfam": "IPv4", 00:55:46.490 "ctrlr_loss_timeout_sec": 0, 00:55:46.490 "ddgst": false, 00:55:46.490 "fast_io_fail_timeout_sec": 0, 00:55:46.490 "hdgst": false, 00:55:46.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:46.490 "name": "nvme0", 00:55:46.490 "prchk_guard": false, 00:55:46.490 
"prchk_reftag": false, 00:55:46.490 "psk": "key0", 00:55:46.490 "reconnect_delay_sec": 0, 00:55:46.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:46.490 "traddr": "127.0.0.1", 00:55:46.490 "trsvcid": "4420", 00:55:46.490 "trtype": "TCP" 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "bdev_nvme_set_hotplug", 00:55:46.490 "params": { 00:55:46.490 "enable": false, 00:55:46.490 "period_us": 100000 00:55:46.490 } 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "method": "bdev_wait_for_examine" 00:55:46.490 } 00:55:46.490 ] 00:55:46.490 }, 00:55:46.490 { 00:55:46.490 "subsystem": "nbd", 00:55:46.490 "config": [] 00:55:46.490 } 00:55:46.490 ] 00:55:46.490 }' 00:55:46.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:55:46.490 15:09:05 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:55:46.490 15:09:05 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:55:46.490 15:09:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:46.490 [2024-07-22 15:09:06.009740] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:55:46.491 [2024-07-22 15:09:06.009839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118379 ] 00:55:46.750 [2024-07-22 15:09:06.154138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:46.750 [2024-07-22 15:09:06.209118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:55:46.750 [2024-07-22 15:09:06.367006] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:55:47.320 15:09:06 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:55:47.320 15:09:06 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:55:47.579 15:09:06 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:55:47.579 15:09:06 keyring_file -- keyring/file.sh@120 -- # jq length 00:55:47.579 15:09:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:47.579 15:09:07 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:55:47.579 15:09:07 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:55:47.579 15:09:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:55:47.579 15:09:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:47.579 15:09:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:47.579 15:09:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:55:47.579 15:09:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:47.839 15:09:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:55:47.839 15:09:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:55:47.839 15:09:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:55:47.839 15:09:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:55:47.839 15:09:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:47.839 15:09:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:55:47.839 15:09:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:55:48.099 15:09:07 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:55:48.099 15:09:07 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:55:48.099 15:09:07 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:55:48.099 15:09:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:55:48.358 15:09:07 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:55:48.358 15:09:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:55:48.358 15:09:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.IELa8NmhSG /tmp/tmp.kbKvwqwKqa 00:55:48.358 15:09:07 keyring_file -- keyring/file.sh@20 -- # killprocess 118379 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 118379 ']' 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@950 -- # kill -0 118379 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@951 -- # uname 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118379 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118379' 00:55:48.358 killing process with pid 118379 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@965 -- # kill 118379 00:55:48.358 Received shutdown signal, test time was about 1.000000 seconds 00:55:48.358 00:55:48.358 Latency(us) 00:55:48.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:48.358 =================================================================================================================== 00:55:48.358 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:55:48.358 15:09:07 keyring_file -- common/autotest_common.sh@970 -- # wait 118379 00:55:48.617 15:09:08 keyring_file -- keyring/file.sh@21 -- # killprocess 117882 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 117882 ']' 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@950 -- # kill -0 117882 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@951 -- # uname 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 117882 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:55:48.617 killing process with pid 117882 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 117882' 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@965 -- # kill 117882 00:55:48.617 [2024-07-22 15:09:08.170583] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:55:48.617 15:09:08 keyring_file -- common/autotest_common.sh@970 -- # wait 117882 00:55:48.877 00:55:48.877 real 0m14.654s 00:55:48.877 user 0m36.131s 00:55:48.877 sys 
0m3.143s 00:55:48.877 15:09:08 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:55:48.877 15:09:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:55:48.877 ************************************ 00:55:48.877 END TEST keyring_file 00:55:48.877 ************************************ 00:55:49.140 15:09:08 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:55:49.140 15:09:08 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:55:49.140 15:09:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:55:49.140 15:09:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:55:49.140 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:55:49.140 ************************************ 00:55:49.140 START TEST keyring_linux 00:55:49.140 ************************************ 00:55:49.140 15:09:08 keyring_linux -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:55:49.140 * Looking for test storage... 00:55:49.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:55:49.140 15:09:08 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:55:49.140 15:09:08 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=03d0feaf-0e67-45cf-98ce-f4c5b5cfc4f5 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:55:49.140 15:09:08 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:55:49.140 15:09:08 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:55:49.140 15:09:08 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:49.140 15:09:08 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:49.140 15:09:08 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.140 15:09:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.140 15:09:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.141 15:09:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:55:49.141 15:09:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:55:49.141 15:09:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:55:49.141 15:09:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:55:49.141 15:09:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:55:49.141 15:09:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:55:49.141 15:09:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:55:49.141 15:09:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@705 -- # python - 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:55:49.141 /tmp/:spdk-test:key0 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:55:49.141 15:09:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:55:49.141 15:09:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:55:49.141 15:09:08 keyring_linux -- nvmf/common.sh@705 -- # python - 00:55:49.400 15:09:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:55:49.400 /tmp/:spdk-test:key1 00:55:49.400 15:09:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:55:49.400 15:09:08 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:55:49.400 15:09:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=118523 00:55:49.400 15:09:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 118523 00:55:49.400 15:09:08 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 118523 ']' 00:55:49.400 15:09:08 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:55:49.400 15:09:08 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:55:49.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:55:49.400 15:09:08 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:55:49.400 15:09:08 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:55:49.400 15:09:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:55:49.400 [2024-07-22 15:09:08.851841] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:55:49.400 [2024-07-22 15:09:08.851962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118523 ] 00:55:49.400 [2024-07-22 15:09:08.995512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:49.658 [2024-07-22 15:09:09.051483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:55:50.226 15:09:09 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:55:50.226 15:09:09 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:55:50.226 15:09:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:55:50.226 15:09:09 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:55:50.226 15:09:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:55:50.226 [2024-07-22 15:09:09.794190] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:55:50.226 null0 00:55:50.226 [2024-07-22 15:09:09.826091] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:55:50.226 [2024-07-22 15:09:09.826334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:55:50.226 15:09:09 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:55:50.226 15:09:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:55:50.226 242000329 00:55:50.226 15:09:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:55:50.485 283515235 00:55:50.485 15:09:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=118559 00:55:50.485 15:09:09 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:55:50.485 15:09:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 118559 /var/tmp/bperf.sock 00:55:50.485 15:09:09 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 118559 ']' 00:55:50.485 15:09:09 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:55:50.485 15:09:09 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:55:50.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:55:50.485 15:09:09 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:55:50.485 15:09:09 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:55:50.485 15:09:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:55:50.485 [2024-07-22 15:09:09.909453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:55:50.485 [2024-07-22 15:09:09.909544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118559 ] 00:55:50.485 [2024-07-22 15:09:10.048366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:55:50.485 [2024-07-22 15:09:10.104858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:55:51.419 15:09:10 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:55:51.419 15:09:10 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:55:51.419 15:09:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:55:51.419 15:09:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:55:51.678 15:09:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:55:51.678 15:09:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:55:51.937 15:09:11 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:55:51.937 15:09:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:55:52.195 [2024-07-22 15:09:11.578223] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:55:52.195 nvme0n1 00:55:52.195 15:09:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:55:52.195 15:09:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:55:52.195 15:09:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:55:52.195 15:09:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:55:52.195 15:09:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:55:52.195 15:09:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:52.452 15:09:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:55:52.452 15:09:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:55:52.452 15:09:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:55:52.452 15:09:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:55:52.452 15:09:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:55:52.452 15:09:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:52.452 15:09:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:55:52.709 15:09:12 keyring_linux -- keyring/linux.sh@25 -- # sn=242000329 00:55:52.709 15:09:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:55:52.709 15:09:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:55:52.709 15:09:12 keyring_linux -- keyring/linux.sh@26 -- # [[ 242000329 == \2\4\2\0\0\0\3\2\9 ]] 00:55:52.709 15:09:12 keyring_linux -- keyring/linux.sh@27 -- 
# keyctl print 242000329 00:55:52.709 15:09:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:55:52.709 15:09:12 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:55:52.968 Running I/O for 1 seconds... 00:55:53.901 00:55:53.901 Latency(us) 00:55:53.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:53.902 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:55:53.902 nvme0n1 : 1.01 15751.29 61.53 0.00 0.00 8089.15 6210.18 23123.62 00:55:53.902 =================================================================================================================== 00:55:53.902 Total : 15751.29 61.53 0.00 0.00 8089.15 6210.18 23123.62 00:55:53.902 0 00:55:53.902 15:09:13 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:55:53.902 15:09:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:55:54.160 15:09:13 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:55:54.160 15:09:13 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:55:54.160 15:09:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:55:54.160 15:09:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:55:54.160 15:09:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:55:54.160 15:09:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:55:54.418 15:09:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:55:54.418 15:09:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:55:54.418 15:09:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:55:54.418 15:09:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:55:54.418 15:09:13 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:55:54.418 15:09:13 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:55:54.418 15:09:13 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:55:54.418 15:09:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:54.418 15:09:13 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:55:54.418 15:09:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:55:54.418 15:09:13 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:55:54.418 15:09:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:55:54.678 
[2024-07-22 15:09:14.190535] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:55:54.678 [2024-07-22 15:09:14.191027] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2610 (107): Transport endpoint is not connected 00:55:54.678 [2024-07-22 15:09:14.192011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa2610 (9): Bad file descriptor 00:55:54.678 [2024-07-22 15:09:14.193008] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:55:54.678 [2024-07-22 15:09:14.193035] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:55:54.678 [2024-07-22 15:09:14.193043] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:55:54.678 2024/07/22 15:09:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:55:54.678 request: 00:55:54.678 { 00:55:54.678 "method": "bdev_nvme_attach_controller", 00:55:54.678 "params": { 00:55:54.678 "name": "nvme0", 00:55:54.678 "trtype": "tcp", 00:55:54.678 "traddr": "127.0.0.1", 00:55:54.678 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:55:54.678 "adrfam": "ipv4", 00:55:54.678 "trsvcid": "4420", 00:55:54.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:55:54.678 "psk": ":spdk-test:key1" 00:55:54.678 } 00:55:54.678 } 00:55:54.678 Got JSON-RPC error response 00:55:54.678 GoRPCClient: error on JSON-RPC call 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@33 -- # sn=242000329 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 242000329 00:55:54.678 1 links removed 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@33 -- # sn=283515235 00:55:54.678 15:09:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 283515235 00:55:54.678 1 links removed 00:55:54.678 15:09:14 keyring_linux -- 
keyring/linux.sh@41 -- # killprocess 118559 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 118559 ']' 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 118559 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:55:54.678 15:09:14 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118559 00:55:54.678 killing process with pid 118559 00:55:54.678 Received shutdown signal, test time was about 1.000000 seconds 00:55:54.678 00:55:54.678 Latency(us) 00:55:54.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:55:54.679 =================================================================================================================== 00:55:54.679 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:55:54.679 15:09:14 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:55:54.679 15:09:14 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:55:54.679 15:09:14 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118559' 00:55:54.679 15:09:14 keyring_linux -- common/autotest_common.sh@965 -- # kill 118559 00:55:54.679 15:09:14 keyring_linux -- common/autotest_common.sh@970 -- # wait 118559 00:55:54.938 15:09:14 keyring_linux -- keyring/linux.sh@42 -- # killprocess 118523 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 118523 ']' 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 118523 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118523 00:55:54.938 killing process with pid 118523 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118523' 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@965 -- # kill 118523 00:55:54.938 15:09:14 keyring_linux -- common/autotest_common.sh@970 -- # wait 118523 00:55:55.197 00:55:55.197 real 0m6.259s 00:55:55.197 user 0m12.194s 00:55:55.197 sys 0m1.546s 00:55:55.197 15:09:14 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:55:55.197 15:09:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:55:55.197 ************************************ 00:55:55.197 END TEST keyring_linux 00:55:55.197 ************************************ 00:55:55.456 15:09:14 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 
']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:55:55.456 15:09:14 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:55:55.456 15:09:14 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:55:55.456 15:09:14 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:55:55.456 15:09:14 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:55:55.456 15:09:14 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:55:55.456 15:09:14 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:55:55.456 15:09:14 -- common/autotest_common.sh@720 -- # xtrace_disable 00:55:55.456 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:55:55.456 15:09:14 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:55:55.456 15:09:14 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:55:55.456 15:09:14 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:55:55.456 15:09:14 -- common/autotest_common.sh@10 -- # set +x 00:55:57.361 INFO: APP EXITING 00:55:57.361 INFO: killing all VMs 00:55:57.361 INFO: killing vhost app 00:55:57.361 INFO: EXIT DONE 00:55:57.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:58.188 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:55:58.188 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:55:58.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:55:59.028 Cleaning 00:55:59.028 Removing: /var/run/dpdk/spdk0/config 00:55:59.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:55:59.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:55:59.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:55:59.028 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:55:59.028 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:55:59.028 Removing: /var/run/dpdk/spdk0/hugepage_info 00:55:59.028 Removing: /var/run/dpdk/spdk1/config 00:55:59.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:55:59.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:55:59.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:55:59.028 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:55:59.028 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:55:59.028 Removing: /var/run/dpdk/spdk1/hugepage_info 00:55:59.028 Removing: /var/run/dpdk/spdk2/config 00:55:59.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:55:59.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:55:59.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:55:59.028 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:55:59.028 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:55:59.028 Removing: /var/run/dpdk/spdk2/hugepage_info 00:55:59.028 Removing: /var/run/dpdk/spdk3/config 00:55:59.029 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:55:59.029 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:55:59.029 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:55:59.029 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:55:59.029 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:55:59.029 Removing: /var/run/dpdk/spdk3/hugepage_info 00:55:59.029 Removing: /var/run/dpdk/spdk4/config 00:55:59.029 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:55:59.029 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:55:59.029 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:55:59.029 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:55:59.029 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:55:59.029 Removing: /var/run/dpdk/spdk4/hugepage_info 00:55:59.029 Removing: /dev/shm/nvmf_trace.0 00:55:59.029 Removing: /dev/shm/spdk_tgt_trace.pid72719 00:55:59.029 Removing: /var/run/dpdk/spdk0 00:55:59.029 Removing: /var/run/dpdk/spdk1 00:55:59.029 Removing: /var/run/dpdk/spdk2 00:55:59.029 Removing: /var/run/dpdk/spdk3 00:55:59.029 Removing: /var/run/dpdk/spdk4 00:55:59.029 Removing: /var/run/dpdk/spdk_pid100106 00:55:59.029 Removing: /var/run/dpdk/spdk_pid100445 00:55:59.029 Removing: /var/run/dpdk/spdk_pid100818 00:55:59.029 Removing: /var/run/dpdk/spdk_pid100820 00:55:59.029 Removing: /var/run/dpdk/spdk_pid103078 00:55:59.029 Removing: /var/run/dpdk/spdk_pid103385 00:55:59.029 Removing: /var/run/dpdk/spdk_pid103874 00:55:59.029 Removing: /var/run/dpdk/spdk_pid103880 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104212 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104232 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104250 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104275 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104287 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104430 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104432 00:55:59.029 Removing: /var/run/dpdk/spdk_pid104540 00:55:59.303 Removing: /var/run/dpdk/spdk_pid104542 00:55:59.303 Removing: /var/run/dpdk/spdk_pid104650 00:55:59.303 Removing: /var/run/dpdk/spdk_pid104652 00:55:59.303 Removing: /var/run/dpdk/spdk_pid105139 00:55:59.303 Removing: /var/run/dpdk/spdk_pid105182 00:55:59.303 Removing: /var/run/dpdk/spdk_pid105333 00:55:59.303 Removing: /var/run/dpdk/spdk_pid105448 00:55:59.303 Removing: /var/run/dpdk/spdk_pid105851 00:55:59.303 Removing: /var/run/dpdk/spdk_pid106095 00:55:59.303 Removing: /var/run/dpdk/spdk_pid106577 00:55:59.303 Removing: /var/run/dpdk/spdk_pid107157 00:55:59.303 Removing: /var/run/dpdk/spdk_pid108462 00:55:59.303 Removing: /var/run/dpdk/spdk_pid109054 00:55:59.303 Removing: /var/run/dpdk/spdk_pid109060 00:55:59.303 Removing: /var/run/dpdk/spdk_pid110966 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111051 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111141 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111226 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111384 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111475 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111560 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111645 00:55:59.303 Removing: /var/run/dpdk/spdk_pid111991 00:55:59.303 Removing: /var/run/dpdk/spdk_pid112679 00:55:59.303 Removing: /var/run/dpdk/spdk_pid114021 00:55:59.303 Removing: /var/run/dpdk/spdk_pid114221 00:55:59.303 Removing: /var/run/dpdk/spdk_pid114506 00:55:59.303 Removing: /var/run/dpdk/spdk_pid114796 00:55:59.303 Removing: /var/run/dpdk/spdk_pid115353 00:55:59.303 Removing: /var/run/dpdk/spdk_pid115358 00:55:59.303 Removing: /var/run/dpdk/spdk_pid115716 00:55:59.303 Removing: /var/run/dpdk/spdk_pid115875 00:55:59.303 Removing: /var/run/dpdk/spdk_pid116026 00:55:59.303 Removing: /var/run/dpdk/spdk_pid116123 00:55:59.303 Removing: /var/run/dpdk/spdk_pid116274 00:55:59.303 Removing: /var/run/dpdk/spdk_pid116383 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117055 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117090 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117121 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117387 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117417 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117452 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117882 00:55:59.303 Removing: /var/run/dpdk/spdk_pid117913 00:55:59.303 Removing: 
/var/run/dpdk/spdk_pid118379 00:55:59.303 Removing: /var/run/dpdk/spdk_pid118523 00:55:59.303 Removing: /var/run/dpdk/spdk_pid118559 00:55:59.303 Removing: /var/run/dpdk/spdk_pid72578 00:55:59.303 Removing: /var/run/dpdk/spdk_pid72719 00:55:59.303 Removing: /var/run/dpdk/spdk_pid72980 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73067 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73106 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73216 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73246 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73364 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73633 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73803 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73883 00:55:59.303 Removing: /var/run/dpdk/spdk_pid73974 00:55:59.303 Removing: /var/run/dpdk/spdk_pid74059 00:55:59.303 Removing: /var/run/dpdk/spdk_pid74092 00:55:59.303 Removing: /var/run/dpdk/spdk_pid74133 00:55:59.303 Removing: /var/run/dpdk/spdk_pid74189 00:55:59.303 Removing: /var/run/dpdk/spdk_pid74302 00:55:59.303 Removing: /var/run/dpdk/spdk_pid74918 00:55:59.303 Removing: /var/run/dpdk/spdk_pid74976 00:55:59.303 Removing: /var/run/dpdk/spdk_pid75041 00:55:59.303 Removing: /var/run/dpdk/spdk_pid75069 00:55:59.303 Removing: /var/run/dpdk/spdk_pid75137 00:55:59.303 Removing: /var/run/dpdk/spdk_pid75165 00:55:59.303 Removing: /var/run/dpdk/spdk_pid75238 00:55:59.303 Removing: /var/run/dpdk/spdk_pid75261 00:55:59.303 Removing: /var/run/dpdk/spdk_pid75318 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75348 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75394 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75424 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75567 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75597 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75678 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75742 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75767 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75825 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75860 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75894 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75923 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75958 00:55:59.561 Removing: /var/run/dpdk/spdk_pid75992 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76027 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76060 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76096 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76125 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76159 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76194 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76223 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76263 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76292 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76331 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76361 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76393 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76436 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76465 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76506 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76565 00:55:59.561 Removing: /var/run/dpdk/spdk_pid76676 00:55:59.561 Removing: /var/run/dpdk/spdk_pid77090 00:55:59.561 Removing: /var/run/dpdk/spdk_pid83796 00:55:59.561 Removing: /var/run/dpdk/spdk_pid84137 00:55:59.561 Removing: /var/run/dpdk/spdk_pid86571 00:55:59.561 Removing: /var/run/dpdk/spdk_pid86956 00:55:59.561 Removing: /var/run/dpdk/spdk_pid87196 00:55:59.561 Removing: /var/run/dpdk/spdk_pid87237 00:55:59.561 Removing: /var/run/dpdk/spdk_pid88090 00:55:59.561 Removing: /var/run/dpdk/spdk_pid88140 00:55:59.561 Removing: /var/run/dpdk/spdk_pid88486 
00:55:59.561 Removing: /var/run/dpdk/spdk_pid89006 00:55:59.561 Removing: /var/run/dpdk/spdk_pid89428 00:55:59.561 Removing: /var/run/dpdk/spdk_pid90374 00:55:59.561 Removing: /var/run/dpdk/spdk_pid91343 00:55:59.561 Removing: /var/run/dpdk/spdk_pid91460 00:55:59.561 Removing: /var/run/dpdk/spdk_pid91522 00:55:59.561 Removing: /var/run/dpdk/spdk_pid92965 00:55:59.561 Removing: /var/run/dpdk/spdk_pid93189 00:55:59.561 Removing: /var/run/dpdk/spdk_pid98142 00:55:59.561 Removing: /var/run/dpdk/spdk_pid98562 00:55:59.561 Removing: /var/run/dpdk/spdk_pid98670 00:55:59.561 Removing: /var/run/dpdk/spdk_pid98818 00:55:59.561 Removing: /var/run/dpdk/spdk_pid98864 00:55:59.561 Removing: /var/run/dpdk/spdk_pid98904 00:55:59.561 Removing: /var/run/dpdk/spdk_pid98944 00:55:59.561 Removing: /var/run/dpdk/spdk_pid99097 00:55:59.561 Removing: /var/run/dpdk/spdk_pid99245 00:55:59.561 Removing: /var/run/dpdk/spdk_pid99499 00:55:59.561 Removing: /var/run/dpdk/spdk_pid99616 00:55:59.561 Removing: /var/run/dpdk/spdk_pid99858 00:55:59.561 Removing: /var/run/dpdk/spdk_pid99978 00:55:59.561 Clean 00:55:59.819 15:09:19 -- common/autotest_common.sh@1447 -- # return 0 00:55:59.819 15:09:19 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:55:59.819 15:09:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:59.819 15:09:19 -- common/autotest_common.sh@10 -- # set +x 00:55:59.820 15:09:19 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:55:59.820 15:09:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:55:59.820 15:09:19 -- common/autotest_common.sh@10 -- # set +x 00:55:59.820 15:09:19 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:55:59.820 15:09:19 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:55:59.820 15:09:19 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:55:59.820 15:09:19 -- spdk/autotest.sh@391 -- # hash lcov 00:55:59.820 15:09:19 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:55:59.820 15:09:19 -- spdk/autotest.sh@393 -- # hostname 00:55:59.820 15:09:19 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:56:00.078 geninfo: WARNING: invalid characters removed from testname! 
00:56:26.626 15:09:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:29.914 15:09:48 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:31.819 15:09:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:34.410 15:09:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:36.945 15:09:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:39.478 15:09:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:56:42.023 15:10:01 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:56:42.023 15:10:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:56:42.023 15:10:01 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:56:42.023 15:10:01 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:56:42.023 15:10:01 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:56:42.024 15:10:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:42.024 15:10:01 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:42.024 15:10:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:42.024 15:10:01 -- paths/export.sh@5 -- $ export PATH 00:56:42.024 15:10:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:56:42.024 15:10:01 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:56:42.024 15:10:01 -- common/autobuild_common.sh@437 -- $ date +%s 00:56:42.024 15:10:01 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721661001.XXXXXX 00:56:42.024 15:10:01 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721661001.eEnQ7h 00:56:42.024 15:10:01 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:56:42.024 15:10:01 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:56:42.024 15:10:01 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:56:42.024 15:10:01 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:56:42.024 15:10:01 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:56:42.024 15:10:01 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:56:42.024 15:10:01 -- common/autobuild_common.sh@453 -- $ get_config_params 00:56:42.024 15:10:01 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:56:42.024 15:10:01 -- common/autotest_common.sh@10 -- $ set +x 00:56:42.024 15:10:01 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:56:42.024 15:10:01 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:56:42.024 15:10:01 -- pm/common@17 -- $ local monitor 00:56:42.024 15:10:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:42.024 15:10:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:42.024 15:10:01 -- pm/common@25 -- $ sleep 1 00:56:42.024 15:10:01 -- pm/common@21 -- $ date +%s 00:56:42.024 15:10:01 -- pm/common@21 -- $ date +%s 00:56:42.024 15:10:01 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721661001 00:56:42.024 15:10:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721661001 00:56:42.282 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721661001_collect-vmstat.pm.log 00:56:42.282 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721661001_collect-cpu-load.pm.log 00:56:43.214 15:10:02 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:56:43.215 15:10:02 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:56:43.215 15:10:02 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:56:43.215 15:10:02 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:56:43.215 15:10:02 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:56:43.215 15:10:02 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:56:43.215 15:10:02 -- spdk/autopackage.sh@19 -- $ timing_finish 00:56:43.215 15:10:02 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:56:43.215 15:10:02 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:56:43.215 15:10:02 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:56:43.215 15:10:02 -- spdk/autopackage.sh@20 -- $ exit 0 00:56:43.215 15:10:02 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:56:43.215 15:10:02 -- pm/common@29 -- $ signal_monitor_resources TERM 00:56:43.215 15:10:02 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:56:43.215 15:10:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:43.215 15:10:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:56:43.215 15:10:02 -- pm/common@44 -- $ pid=120301 00:56:43.215 15:10:02 -- pm/common@50 -- $ kill -TERM 120301 00:56:43.215 15:10:02 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:56:43.215 15:10:02 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:56:43.215 15:10:02 -- pm/common@44 -- $ pid=120302 00:56:43.215 15:10:02 -- pm/common@50 -- $ kill -TERM 120302 00:56:43.215 + [[ -n 6069 ]] 00:56:43.215 + sudo kill 6069 00:56:43.224 [Pipeline] } 00:56:43.243 [Pipeline] // timeout 00:56:43.249 [Pipeline] } 00:56:43.265 [Pipeline] // stage 00:56:43.271 [Pipeline] } 00:56:43.289 [Pipeline] // catchError 00:56:43.299 [Pipeline] stage 00:56:43.302 [Pipeline] { (Stop VM) 00:56:43.317 [Pipeline] sh 00:56:43.625 + vagrant halt 00:56:46.932 ==> default: Halting domain... 00:56:55.056 [Pipeline] sh 00:56:55.336 + vagrant destroy -f 00:56:58.626 ==> default: Removing domain... 
00:56:58.639 [Pipeline] sh 00:56:58.926 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:56:58.937 [Pipeline] } 00:56:58.957 [Pipeline] // stage 00:56:58.962 [Pipeline] } 00:56:58.979 [Pipeline] // dir 00:56:58.986 [Pipeline] } 00:56:59.005 [Pipeline] // wrap 00:56:59.013 [Pipeline] } 00:56:59.032 [Pipeline] // catchError 00:56:59.044 [Pipeline] stage 00:56:59.047 [Pipeline] { (Epilogue) 00:56:59.063 [Pipeline] sh 00:56:59.399 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:57:05.983 [Pipeline] catchError 00:57:05.986 [Pipeline] { 00:57:06.002 [Pipeline] sh 00:57:06.289 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:57:06.289 Artifacts sizes are good 00:57:06.299 [Pipeline] } 00:57:06.317 [Pipeline] // catchError 00:57:06.328 [Pipeline] archiveArtifacts 00:57:06.336 Archiving artifacts 00:57:06.517 [Pipeline] cleanWs 00:57:06.531 [WS-CLEANUP] Deleting project workspace... 00:57:06.532 [WS-CLEANUP] Deferred wipeout is used... 00:57:06.540 [WS-CLEANUP] done 00:57:06.542 [Pipeline] } 00:57:06.564 [Pipeline] // stage 00:57:06.570 [Pipeline] } 00:57:06.590 [Pipeline] // node 00:57:06.597 [Pipeline] End of Pipeline 00:57:06.640 Finished: SUCCESS